Virtual three-dimensional models of buildings and cities are used in a growing range of applications, including driving and flying simulators, first responder training, internet maps, games, movies, and architectural and engineering walkthroughs. In each of these virtual environments, small buildings and dwellings can make up a huge proportion of the objects encountered, and their character sets the look, identity and locale of the simulation. With thousands of such buildings in the background, however, developers of virtual cities can struggle to populate their world models with realistic buildings beyond the specific locations that form the focus of the simulation.
By integrating machine learning techniques, Lubin Fan and Peter Wonka from the University's Visual Computing Center have developed a method that automatically completes and generates three-dimensional building models characteristic of a given area from partial images.
“Machine learning allows us to take data for existing buildings and learn the ‘look’ of those buildings, which then allows us to synthesize new buildings,” said Wonka. “This technique is used widely in visual computing for objects such as humans and airplanes, but buildings and their structural variations are more difficult to learn. The core of our work was to come up with a meaningful set of features, parameters and their relationships that can describe buildings generically.”
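The idea Wonka describes, learning the joint statistics of a set of building parameters from examples and then using them to complete a partially specified building, can be illustrated with a toy sketch. The feature names, the tiny dataset, and the use of a single joint Gaussian are all illustrative assumptions for this example; the actual model in Fan and Wonka's work captures far richer structural relationships.

```python
import numpy as np

# Hypothetical building parameters; the real feature set is much richer.
FEATURES = ["footprint_width", "footprint_depth", "num_floors", "roof_pitch"]

def fit_gaussian(samples):
    """Learn a joint Gaussian over building parameters from example buildings."""
    X = np.asarray(samples, dtype=float)
    return X.mean(axis=0), np.cov(X, rowvar=False)

def complete(mu, sigma, observed):
    """Fill in unobserved parameters given a partial observation.

    Uses the conditional mean of a multivariate Gaussian:
        mu_u|o = mu_u + S_uo S_oo^-1 (x_o - mu_o)
    where o indexes observed and u unobserved parameters.
    """
    obs_idx = sorted(observed)
    un_idx = [i for i in range(len(mu)) if i not in observed]
    x_o = np.array([observed[i] for i in obs_idx])
    S_oo = sigma[np.ix_(obs_idx, obs_idx)]
    S_uo = sigma[np.ix_(un_idx, obs_idx)]
    mu_u = mu[un_idx] + S_uo @ np.linalg.solve(S_oo, x_o - mu[obs_idx])
    full = np.empty(len(mu))
    full[obs_idx] = x_o
    full[un_idx] = mu_u
    return full

# Toy "dataset" of buildings from one neighborhood
# (width in m, depth in m, floors, roof pitch in degrees).
data = [
    [10.0, 8.0, 2, 30.0],
    [12.0, 9.0, 2, 32.0],
    [ 9.0, 7.5, 1, 28.0],
    [11.0, 8.5, 2, 31.0],
]
mu, sigma = fit_gaussian(data)

# Complete a building where only the footprint width (index 0) is known.
estimate = complete(mu, sigma, {0: 10.5})
```

Because the parameters are learned from buildings of one area, the completed values stay characteristic of that area; sampling from the fitted distribution rather than taking the conditional mean would synthesize entirely new, plausible buildings.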