Posts tagged with "3D Modeling":


Aesthetics of Prosthetics compares computer-enhanced design practices

How have contemporary architecture and culture been shaped by our access to digital tools, technologies, and computational devices? This was the central question of Aesthetics of Prosthetics, the Pratt Institute Department of Architecture’s first alumni-run exhibition, curated by recent alumni and current students Ceren Arslan, Alican Taylan, Can Imamoglu, and Irmak Ciftci. The exhibition, which closed last week, took place at Siegel Gallery in Brooklyn.

The curatorial team staged an open call for submissions that addressed the ubiquity of “prosthetic intelligence” in how we interact with and design the built environment. “We define prosthetic intelligence as any device or tool that enhances our mental environment as opposed to our physical environment,” read the curatorial statement. “Here is the simplest everyday example: When at a restaurant with friends, you reach out to your smartphone to do an online search for a reference to further the conversation, you use prosthetic intelligence.” As none of the works shown have actually been built, the pieces experimented with the possibilities for representation and fabrication that “prosthetic intelligence” allows. The selected submissions used a range of technologies and methods, including photography, digital collage, AI technology, digital modeling, and virtual reality.

The abundant access to data and its role in shaping architecture and aesthetics was a pervasive theme among the show's participants. Ceren Arslan's Los Angeles, for instance, used photo collage and editing to compile internet-sourced images into an imaginary yet believable streetscape. Others speculated about data visualization at a moment when drawings are increasingly expected to be read not only by humans but also by machines and artificial intelligence, as in Brandon Wetzel's deep data drawing.

"The work shown at the exhibition, rather than serving as a speculative criticism pointing out towards a techno-fetishist paradigm, tries to act as recording device to capture a moment in architectural discourse. Both the excitement and skepticism around the presented methodologies are due to the fact that they are yet to come to fruition as built projects," said the curators in a statement. 


Architect creates app to change how exhibitions are designed

For all the advances in technology over the past decade, the experience of curating and viewing museum shows has remained relatively unchanged. Even though digital archive systems exist and have certainly helped bring old institutions into the present, they have relatively little influence over the ways museum shows are designed and shared. The normal practice is more or less “old school” and even borderline “dysfunctional,” said Bika Rebek, principal of the New York and Vienna–based firm Some Place Studio. In fact, a survey she conducted early on found that many of the software suites museum professionals were using were major time sinks; fifty percent of respondents said they felt they were “wasting time” trying to fill in data or prepare presentations for design teams.

To Rebek, this is very much an architectural problem, or at least a problem architects can solve. Over the past two years, supported by NEW INC and the Knight Foundation, she has been developing Tools for Show, an interactive web-based application for designing and exploring exhibitions at various scales—from the level of a vitrine to a multi-floor museum. Leveraging her experience as an architect, 3D graphics expert, and exhibition designer (she’s worked on major shows for the Met and Met Breuer, including the OMA-led design for the 2016 Costume Institute exhibition Manus x Machina), Rebek set out to build an application that lets exhibition designers and curators collaborate and that opens new ways of engaging with cultural material for users anywhere.

Currently, institutions use many different gallery tools, she explained, which don’t necessarily interact and don’t usually let curators think spatially in a straightforward way. Tools for Show allows users to import all sorts of information and metadata from existing collection management software (or enter it anew); that metadata is attached to artworks stored in a library, which can then be dragged and dropped into a 3D environment at scale. Paintings and simple 3D shapes are generated automatically; for more complex forms, where an image projected onto a shape of similar footprint isn’t enough, users can create their own models. For example, to produce the New Museum’s 2017 show Trigger: Gender as a Tool and a Weapon, Rebek rendered the space and included many of the basic furnishings unique to the museum. For other projects, like a test case with the Louvre's sculptures, she found free-to-use models and 3D scans online. Users can drag these objects across the 3D environment and access in-depth information about them with just a click. With quick visual results and Google Docs-style automatic updates for collaboration, Tools for Show could help replace not just cumbersome content management systems, but endless emails too.

Rebek sees Tools for Show as having many potential uses. It can be used to produce shows, allowing curators to collaboratively and easily design and redesign their exhibitions, and, after a show comes down, it can serve as an archive. It can also be its own presentation system—not only allowing “visitors” from across the globe to see shows they might otherwise be unable to see, but also creating new interactive exhibitions or even just vitrines, something she’s been testing out with Miami’s Vizcaya Museum and Gardens. More than just making work easier for curators and designers, Tools for Show could give a degree of curatorial power and play over to a broader audience.
“[Tools for Show] could give all people the ability to curate their own show without any technical knowledge,” she explained. And, after all, you can't move around archival materials IRL, so why not on an iPad? While some of the curator-focused features of Tools for Show are still in testing, institutions can already request the new display tools like those shown at Vizcaya. Rebek, a faculty member at Columbia University's Graduate School of Architecture, Planning, and Preservation, has also worked with students to use Tools for Show in conjunction with photogrammetry techniques to develop new display methods for otherwise inaccessible parts of the Intrepid Sea, Air & Space Museum, a naval and aerospace history museum located in a decommissioned aircraft carrier floating in the Hudson River. At a recent critique, museum curators were invited to see the students’ new proposals and explore spatial visualizations of the museum through interactive 3D models, AR, and VR, as well as in-browser and mobile tools that included all sorts of additional media and information.
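
The article does not show how Tools for Show is built, but the workflow it describes, collection metadata imported from existing systems, attached to artworks in a library, and placed at true scale in a 3D scene, maps onto a simple data model. The sketch below is a hypothetical Python illustration of that idea only; the class names and fields are assumptions, not Rebek's code.

```python
# Hypothetical sketch only (not Tools for Show's actual code): collection
# metadata travels with each artwork, and placements position works in a
# gallery model at true scale.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Artwork:
    """Metadata imported from a collection management system."""
    accession_number: str
    title: str
    artist: str
    width_m: float           # real dimensions keep the 3D placement at true scale
    height_m: float
    depth_m: float = 0.05    # flat works get a nominal depth
    notes: dict = field(default_factory=dict)


@dataclass
class Placement:
    """An artwork dropped into the gallery model."""
    artwork: Artwork
    position: tuple[float, float, float]   # meters, in the gallery's coordinate system
    rotation_deg: float = 0.0              # rotation about the vertical axis


class ExhibitionScene:
    """A minimal 'library plus drag-and-drop' scene."""

    def __init__(self) -> None:
        self.library: dict[str, Artwork] = {}
        self.placements: list[Placement] = []

    def import_record(self, record: dict) -> Artwork:
        """Pull a record from collection software (or manual entry) into the library."""
        core = {"accession_number", "title", "artist", "width_m", "height_m"}
        work = Artwork(
            accession_number=record["accession_number"],
            title=record["title"],
            artist=record.get("artist", "Unknown"),
            width_m=record["width_m"],
            height_m=record["height_m"],
            notes={k: v for k, v in record.items() if k not in core},
        )
        self.library[work.accession_number] = work
        return work

    def place(self, accession_number: str, position, rotation_deg: float = 0.0) -> Placement:
        """'Drop' a library work into the 3D environment."""
        placement = Placement(self.library[accession_number], tuple(position), rotation_deg)
        self.placements.append(placement)
        return placement


# Usage: import a record, then hang it on a gallery wall at true scale.
scene = ExhibitionScene()
scene.import_record({"accession_number": "2017.12", "title": "Untitled",
                     "artist": "A. Example", "width_m": 1.2, "height_m": 0.9,
                     "medium": "oil on canvas"})
scene.place("2017.12", position=(4.0, 1.5, 0.0))
print(len(scene.placements), "work(s) placed")
```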

How Baidu Maps turns location data into 3-D cityscapes—and big profits

Level 3, number 203. Turn right 10 feet. Go straight for 15 feet. The best way to experience data's strong grip on everyday life in China is to open up Baidu Maps, a mapping app by China’s biggest search engine company, and walk around a shopping mall for one afternoon. Inside the building, a network of Bluetooth beacons, Wi-Fi modems, and satellites from a global navigation satellite system whir and ping through the air and the ionosphere to determine your precise location. The map on the Baidu app tilts to reveal an elaborately modeled 3-D cityscape.

The resolution of Baidu Maps is stunning: Entire cities are modeled in 3-D. Within public buildings, the floorplan of each building level is precisely mapped. As I stand inside the Taikoo Hui Mall in the city of Guangzhou, China, I search for a store within the mall. Baidu Maps reveals which level the store is on and how many meters I need to walk. Strolling through the mall with the app tracking my location with a blue dot on the screen, life starts to feel like a virtual reality experience. The difference between the map's 3-D model and the reality beneath my feet is smaller than ever. The 3-D model makes an uncanny loop: Virtual models were used by architects and designers to design these spaces, which now unfold on a messy plane between real space and screen space.

China now has its own tech giants—Alibaba, JD.com, Tencent Holdings, and Baidu—homegrown behind the Great Firewall of China. Like their American counterparts, these companies have managed to surveil their users and extract valuable data to create new products and features. Baidu began as a search engine but has since branched out into autonomous driving and, therefore, maps. The intricacy of its 3-D visualizations is the result of over 600 million users consulting the app for navigation every day or using apps that rely on Baidu Maps in the background, such as weather apps that use its geolocation features.

The tech company, like counterparts such as Google, takes advantage of multiple features available in smartphones. Smartphones can determine users’ positions by communicating with satellite constellations such as GPS (the Global Positioning System); GLONASS, Russia’s version of GPS; or BeiDou, China’s satellite navigation system. These satellite systems are public infrastructures created by the American, Russian, and Chinese governments, respectively, and they enable phones to determine users’ precise longitude and latitude coordinates. The majority of apps and services on smartphones rely on location services, from food delivery to restaurant reviews. However, satellite navigation is still imprecise—positions are often a few meters off, with anything from the weather to tall buildings affecting accuracy.
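
As a toy illustration of the positioning problem described above (and of why a few meters of error are hard to avoid), the following sketch solves for a 2D position from noisy ranges to transmitters at known locations by least squares. It is a simplified stand-in, not the actual solver used by any GNSS chipset or by Baidu.

```python
# Toy 2D illustration (not any vendor's actual solver): estimate a position from
# noisy ranges to transmitters at known locations, as a receiver does in principle
# with satellite pseudoranges.
import numpy as np
from scipy.optimize import least_squares

beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])  # meters
true_pos = np.array([37.0, 62.0])

rng = np.random.default_rng(0)
# 3 m of range noise stands in for atmospheric delay, multipath off buildings, etc.
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0.0, 3.0, len(beacons))

def residuals(p):
    """Difference between predicted and measured ranges for a candidate position p."""
    return np.linalg.norm(beacons - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimated:", estimate, "error (m):", np.linalg.norm(estimate - true_pos))
```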

However, smartphones contain more than satellite signal receiver chips. A slew of other sensors, such as accelerometers, light sensors, and magnetometers, are embedded in the average smartphone. In 2015, Baidu invested $10 million in IndoorAtlas, a Silicon Valley startup that specializes in indoor mapping. The company's technology is at the forefront of magnetic positioning, which allows indoor maps accurate to about 1 meter to be created simply by using an average smartphone. The technique relies on the Earth's geomagnetic field and the magnetometers in smartphones. By factoring in the unique magnetic "fingerprint" of each building, determined by the composition of its materials, such as steel, a building's floor plan can be mapped out without any data provided by the architect. However, this strategy requires user data at scale; multiple user paths need to be recorded and averaged out to account for any anomalies. Gathering large amounts of data from users becomes an imperative.
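
The following is a deliberately simplified sketch of the fingerprinting idea, not IndoorAtlas's or Baidu's method: many users' magnetometer readings are averaged onto a one-meter grid to build a magnetic map, and a new reading is located by matching it against that map. Real systems use full three-axis field vectors and motion data, but the averaging-at-scale logic is the point here.

```python
# Simplified sketch (not IndoorAtlas's or Baidu's method): average many users'
# magnetometer readings onto a 1 m grid to build a magnetic fingerprint map,
# then locate a new reading by nearest-fingerprint match.
import numpy as np

GRID = 1.0  # meters per cell, roughly the accuracy the article cites

def build_fingerprint_map(traces):
    """traces: (x, y, field_strength_uT) samples collected from many walks."""
    sums, counts = {}, {}
    for x, y, field in traces:
        cell = (round(x / GRID), round(y / GRID))
        sums[cell] = sums.get(cell, 0.0) + field
        counts[cell] = counts.get(cell, 0) + 1
    # Averaging across users smooths out one-off anomalies in any single walk.
    return {cell: sums[cell] / counts[cell] for cell in sums}

def locate(fingerprint_map, observed_field, candidate_cells):
    """Return the candidate cell whose stored fingerprint best matches the reading."""
    return min(candidate_cells, key=lambda c: abs(fingerprint_map[c] - observed_field))

# Usage with fake data: two users walk the same corridor; the second walk is noisier.
rng = np.random.default_rng(1)
walk_a = [(x, 0.0, 48 + 3 * np.sin(x / 2)) for x in np.arange(0, 20, 0.5)]
walk_b = [(x, 0.0, 48 + 3 * np.sin(x / 2) + rng.normal(0, 0.5)) for x in np.arange(0, 20, 0.5)]
fingerprints = build_fingerprint_map(walk_a + walk_b)
print(locate(fingerprints, observed_field=50.5, candidate_cells=list(fingerprints)))
```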

Floorplans aside, magnetic positioning is not the only way user location data becomes a spatial model. As people drive, bike, and walk, each user generates a spatial "trace" that also has velocity data attached to it. From such data, information about the type of path can be derived: Is it a street, a sidewalk, or a highway? This information is increasingly useful for improving the accuracy of Baidu Maps itself, as well as Baidu's autonomous vehicle projects.
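
A crude way to picture this inference is a speed-based heuristic like the sketch below; Baidu's actual classifiers are certainly more sophisticated, so treat the thresholds and labels as illustrative assumptions.

```python
# Crude heuristic sketch (not Baidu's classifier): guess what kind of path a
# trace follows from the speeds observed along it.
import statistics

def classify_path(speeds_kmh):
    """speeds_kmh: speeds sampled along one user's trace, in km/h."""
    typical = statistics.median(speeds_kmh)
    if typical < 7:
        return "sidewalk or footpath"    # walking pace
    if typical < 25:
        return "bike path or slow street"
    if typical < 70:
        return "city street"
    return "highway"

print(classify_path([4.2, 5.1, 4.8]))    # -> sidewalk or footpath
print(classify_path([62, 80, 95, 102]))  # -> highway
```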

The detailed 3-D city models on Baidu Maps offer data that urban designers dream of, but such models only serve Baidu's interests. Satellite navigation accuracy deteriorates in urban canyons, where skyscrapers and dense construction obscure satellites from the receiver chip. These inaccuracies are problematic for autonomous vehicles, given the "safety critical" nature of self-driving cars. Baidu's 3-D maps are not just an aesthetic “wow factor” but also a feature that addresses positioning inaccuracies: by using 3-D models to factor in the sizes and shapes of building envelopes, errors in longitude and latitude coordinates can be corrected.
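
The article does not detail Baidu's correction method, but one simple constraint a building model makes possible can be sketched as follows: a pedestrian fix that lands inside a building footprint is snapped back to the footprint's edge. This is an illustrative assumption, shown here with the shapely geometry library.

```python
# Illustrative assumption only (not Baidu's actual correction): use building
# footprints to rule out impossible positions. A pedestrian fix that lands
# inside a building envelope is snapped to the nearest point on its boundary.
from shapely.geometry import Point, Polygon
from shapely.ops import nearest_points

building = Polygon([(0, 0), (40, 0), (40, 25), (0, 25)])  # footprint, in meters

def correct_fix(fix, footprints):
    for footprint in footprints:
        if footprint.contains(fix):
            snapped, _ = nearest_points(footprint.exterior, fix)
            return snapped
    return fix

noisy_fix = Point(38.0, 12.0)                # GNSS error has pushed the fix indoors
print(correct_fix(noisy_fix, [building]))    # POINT (40 12): back at the building edge
```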

Much of this research has been part of a race between U.S. and Chinese companies in the quest to build self-driving cars. While some 3-D models come from city planning data, in China's ever-changing urban landscape, satellite data has proved far more helpful in generating 3-D building models. As with Google's 3-D-generated buildings, a combination of shadow analysis, satellite imagery, and street view imagery has proved essential for creating 3-D building models automatically, rather than relying on manually modeled, user-uploaded buildings or on city surveyors for the most recent and accurate building dimensions.

None of this data is available to the people who design cities or buildings. Both Baidu and Google have End User License Agreements (EULAs) that restrict where their data can be used and emphasize that such data has to be used within Baidu or Google apps. Some data is made available to computer scientists and self-driving car researchers, such as Baidu's Research Open-Access Dataset (BROAD) training data sets. Most designers have to rely on free, open-source data such as OpenStreetMap, a Wikipedia-like alternative to Baidu and Google Maps. By walling off valuable data that could help urban planning, tech companies are gaining a foothold and control over the reality of material life: they have more valuable insights into transport networks and the movements of people than urban designers do. It's no surprise, then, that both Baidu and Google are making forays into piloting smart cities like Toronto’s Quayside or Shanghai's Baoshan District, gaining even greater control over urban space. No doubt, urban planning and architecture are becoming increasingly automated and privately controlled, shifting into the realm of computer scientists rather than designers.

In her 2019 book The Age of Surveillance Capitalism, Shoshana Zuboff examines how tech companies throughout the world employ surveillance and data extraction methods to turn users into free laborers. Our “behavioral surplus,” as she terms it, is transformed into products that are highly lucrative for these companies and built on proprietary, walled-off data that ordinary users cannot access, even though their labor helped create those products. These products are also marketed as “predictive,” which feeds the desires of companies that hope to anticipate users’ behavior—companies that see users only as targets of advertising.

Over the past several years, American rhetoric surrounding the Chinese “surveillance state” has reached fever pitch. But while China is perceived to be a single-party communist country with state-owned enterprises that do its bidding, the truth is that, since the 1990s, much of the country’s emphasis has been on private growth. Baidu is a private company, not a state-owned enterprise. Companies like Baidu have majority investment from global companies, including many U.S.-based funds like T. Rowe Price, Vanguard, and BlackRock. As China's economy slows, the government is increasingly pressured to play by the global capitalist rulebook and offer greater freedom to private companies alongside less interference from the state. However, private companies often contract with the government to create surveillance measures used across the country.

The rhetoric about the dangers of Chinese state surveillance obfuscates what is also happening in American homes—literally. As Google unveils home assistants that interface with other “smart” appliances, and Google Maps installed on mobile phones tracks user locations, surveillance becomes ubiquitous. Based on your location data, appliances can turn on as you enter your home, and advertisements for milk from your smart fridge can pop up as you walk by a grocery store. Third-party data providers also tap into geolocation data, and combined with the spread of smart objects like smart TVs, toasters, and fridges, it's easy to see why the future might be filled with such scenarios. Indeed, if you own certain smart appliances, Google probably knows what the inside of your home looks like. In 2018, iRobot, the maker of the Roomba vacuum, announced that it was partnering with Google to improve the indoor mapping of homes, and setting up a Roomba with Google Home has never been easier. Big tech companies in the U.S. would like us to believe that surveillance is worse elsewhere, when really, surveillance capitalism is a global condition.

Over the past 30 years, cities around the world have been the locus of enormous economic growth and corresponding increases in inequality. Metropolitan areas with tech-driven economies, such as the Shenzhen-Guangzhou-Hong Kong corridor and the Greater Bay Area, are home to some of the largest tech companies in the world. They are also home to some of the most advanced forms of technological urbanism: While Baidu may not have every single business mapped in rural China, it certainly has the listing of every shop in every mall of Guangzhou.

The overlap between cities as beacons of capital and as spaces where surveillance is ubiquitous is no coincidence. As Google’s parent company, Alphabet, makes moves to build cities and as Baidu aggressively pursues autonomous driving, data about a place, the people who live there, and their daily movements is increasingly crucial to the project of optimizing the city and creating new products, which in turn generates more wealth and more inequality. Places like San Francisco and Shenzhen are well-mapped by large tech companies but harbor some of the worst income gaps in the world.

The "smart city" urbanism enabled by surveillance and ubiquitous data collection is no different from other forms of development that erode affordable housing and public space. Reclaiming our cities in this digital age is not just about reclaiming physical space. We must also reclaim our data.


A French startup is using drones and AI to save the world's architectural heritage

Now active in over 30 countries around the world, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at resolutions never seen before and sharing its many-terabyte models with researchers and with the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem’s work, technology, and plans for the future.

The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do.

Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archeological sites and I was concerned by that. I wanted to find a new way to record these sites and to preserve them even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, to create a digital copy or model of each site together with partner researchers in these different countries.

AN: You mentioned drones and AI; what technology are you using?

YU: We have a partnership with a lab in France, INRIA (Institut National de Recherche en Informatique et en Automatique/National Institute for Research in Computer Science and Automation). They discovered an algorithm that could transform a 2D picture into a 3D point cloud, which is a projection of every pixel of the picture into space. The points in the cloud reproduce the shape and the color of the environment, the building, and so on. It takes billions of points to reproduce the complexity of a place in a photorealistic manner, but because the points are so tiny and so numerous, you cannot see the individual points—you see only the shape of the building in 3D.

Etienne Tellier: The generic term for the technology that converts these big datasets of pictures into 3D models is photogrammetry.

YU: Which is just one process. Even so, photogrammetry was invented more than 100 years ago…Before, it was a manual process and we were only able to reproduce just a part of a wall or something like that. Big data processing has made it possible to reproduce a huge part of the real environment. It’s a very new way of doing things. Just in the last two years, we’ve become able to make a copy of an entire city—like Mosul or Aleppo—something not even possible before. We also have a platform to manage this huge amount of data and we’re working with cloud computing. In the future we want to open this platform to the public.

AN: All of this technology has already grown so quickly. What do you see coming next?

YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us because they're in a rebel zone, for example. Cameras also continue to get better and better. Today we can produce a model with one point per millimeter, and I think in the future we will be able to have ten points per millimeter.
That will enable us to see every detail of something like small writing on a stone.

ET: Another possible evolution, which we are already beginning to see thanks to artificial intelligence, is automatic recognition of what is shown in a 3D model. That's something you can already have with 2D pictures. There are algorithms that can analyze a 2D picture and say, "Oh okay, this is a cat. This is a car." Soon there will probably be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, "Okay, this is a Corinthian column. This dates back to the second century BC." Another technology we are working on is the ability to create beautiful images from 3D models. We’ve had difficulties to overcome because our 3D models are huge. As Yves said before, they are composed of billions of points. For the moment there is no 3D software on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, where we don't have to lower the quality of our 3D models. We can keep the native resolution, quality, and photorealism of our big 3D models and create very beautiful videos from them that can be as big as 32K and can be projected onto very big areas. There will be big developments in this field in the future.

AN: Speaking of projections, what are your approaches to making your research accessible? Once you've preserved a site, how does it become something that people can experience, whether they're specialists or the public?

YU: There are two ways to open this data to the public. The first is producing digital exhibitions that people can see, which we are doing today for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris, at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection-mapped a huge box, 600 square meters [6,458 square feet], with 3D video. We were able to project monuments like the Damascus Mosque or the Palmyra sites, and visitors were surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a scale of one to one so the visitor feels like they’re inside the sites.

AN: So you could project one to one?

ET: Yes, we can project one to one. For example, in the exhibition we participated in recently, at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. Often the visitor could see the sites at a one-to-one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. And they can discover not only the city as a whole but also the monuments and the architectural details. They can switch between different scales—the macro scale of a city, the more micro one of the monument, and then the very micro one of a detail—seamlessly.

AN: What are you working on now?
ET: Recently, we participated in an exhibition financed by Microsoft and held in Paris at the Musée des Plans-Reliefs, a museum that holds replicas of the most important sites in France. They're 3D architectural replicas, or maquettes, that can be 3 meters [approximately 10 feet] wide; they were commissioned by Louis XIV and created during the 17th century because he wanted replicas to help prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality and proposed making an experience in this museum focusing on the replica of Mont-Saint-Michel, the famous site in France. We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both—the replica and the real site—and used them to create the holograms that were embedded and superimposed. Through headsets, visitors would see a hologram of water rising up and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, the interplay between the two. And you could also see the site as it was hundreds of years before. It was a whole new experience relying on augmented reality, and we were really happy to take part in it. This exhibition should travel to Seattle soon.
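
Ubelmann's description of projecting every pixel of a picture into space can be made concrete with a minimal sketch of the back-projection step. Recovering each pixel's depth from overlapping photographs is the hard part of photogrammetry and is not shown here; the camera parameters, depths, and colors below are placeholder assumptions.

```python
# Minimal sketch of the projection step described above. Recovering a depth for
# every pixel from overlapping photos is the hard part of photogrammetry; here
# the depths, colors, and camera intrinsics are placeholder assumptions.
import numpy as np

# Assumed pinhole camera: focal length and principal point in pixels.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
K_INV = np.linalg.inv(K)

def back_project(u, v, depth_m, color):
    """Return one point of the cloud: (x, y, z, r, g, b) in camera coordinates."""
    x, y, z = depth_m * (K_INV @ np.array([u, v, 1.0]))
    return (x, y, z, *color)

# A 640 x 480 image carries ~300,000 pixels; projected from thousands of photos,
# that is how the billion-point models Ubelmann mentions accumulate.
height, width = 480, 640
depth = np.full((height, width), 5.0)              # placeholder: 5 m to everything
image = np.zeros((height, width, 3), dtype=np.uint8)

cloud = [back_project(u, v, depth[v, u], image[v, u])
         for v in range(0, height, 8) for u in range(0, width, 8)]  # sparse sample
print(len(cloud), "points; first:", cloud[0])
```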

Bureau de Change unveils five-story building with undulating brick facade

London’s Fitzrovia neighborhood is a bit of an architectural collage. There are 18th- and 19th-century brick homes interspersed with 20th-century concrete housing blocks and, at its far east end, John Nash’s All Souls Church. The London firm Bureau de Change was asked to create a building sandwiched between two of the many simple brick buildings in the area, and as much as the firm hoped to respect this context and legacy, the designers wanted to disrupt it too. Their “Interlock” project, a five-story residential building with a street-level cafe and a gallery below, features an undulating facade of cool gray-blue bricks that enliven the building's elongated form. “We were interested in taking these very traditional proportions and in some way subverting it,” explained co-founder and director Katerina Dionysopoulou, “like a puzzle box that seems familiar and reveals a hidden complexity that increases the more you interact with it.”

Considering how a brick facade could respond to openings and fenestration, rather than just frame them, Bureau de Change designed ripples that seem to grow out of and shrink into the windows and doors. The firm modeled 14 brick types and worked closely throughout the research and development process with brickmakers to iterate forms that would fit together, fire successfully in the kiln, and remain strong when arranged. Once the final forms were decided, handmade metal molds were fabricated and bricks—5,000 of them—were cast. In addition to the 14 “parent” brick morphologies, 30 related “offspring” bricks were derived during the modeling and testing process, for a total of 44 different brick types. “We were walking the line of what would be technically possible, but through this process, found a point that was both buildable and produced the richness and movement we were trying to achieve,” said Billy Mavropoulos, co-founder and director of Bureau de Change, of striking a balance throughout conception, design, and construction.

The intensive modeling process allowed the designers to control the placement of each individual brick, and installers were given 3D files so they could align each of the rows correctly, which, due to the unusual nature of the structure, required millimeter-level accuracy and specific adaptations for each region of the facade. Between the windows, steel trays were necessary to support the bricks, but installers had to align them expertly so that the frames’ edges would be hidden. Similarly, the layout had to be crafted so that the holes within the bricks that help with their load-bearing ability stayed hidden despite the many odd angles at which they were placed. The bricks were made of Staffordshire Blue Clay and were fired in oxidation to create the matte finish. Building the 5,000-brick “landscape” took three months. “The fabrication team used 1:1 printed templates that set out the number, typology, and location of each brick,” explained the firm, adding that the 188 templates acted as a sort of “construction manuscript.” The process was defined by collaboration between Bureau de Change, the brick manufacturer Forterra, and the construction team, and, as is typical when working iteratively on untested ideas, it took a great deal of trial and error.

SHoP Architects created an iPhone app to construct the Botswana Innovation Hub

New York’s SHoP Architects has created proprietary technology that makes it easier for the firm to organize materials during construction. During the construction of the Barclays Center from 2008 to 2012, the firm developed a novel iPhone interface capable of scanning facade components during fabrication, assembly, transport, and installation to keep an up-to-date digital catalog of the status of construction. Now, the firm is applying this comprehensive platform to the construction of its Innovation Hub in Gaborone, Botswana, where on-site contractors can effortlessly scan recently installed items while checking on the overall progress of the project.

The Botswana Innovation Hub is an ambitious project. The 310,000-square-foot facility is set to be the country’s first LEED-certified building, and environmental performance is significantly impacted by the structure’s complex assembly. SHoP designed an “Energy Blanket” roofscape, which incorporates large overhangs to shade interior spaces and collect rainwater for re-use. Photovoltaic panels are placed across the roofscape to further boost environmental performance.

The project’s complexity is further heightened by an undulating facade that projects off of and indents into the structural system. SHoP’s mobile interface plays an essential role in the project’s logistics and construction. The application labels each element type—e.g., AA2000—and records the number of identical units. Each unit type is assigned to a specific construction crew that tracks the units in a database throughout assembly.
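
SHoP's platform is proprietary, so the sketch below is only a hypothetical illustration of the kind of record-keeping the article describes: each unit carries a type label such as AA2000, a crew assignment, and a scan history from which overall progress can be read.

```python
# Hypothetical sketch (not SHoP's actual system): each facade unit carries a type
# label, a crew assignment, and a history of scan events from fabrication through
# installation; overall progress is read straight from the latest scans.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Status(Enum):
    FABRICATED = 1
    ASSEMBLED = 2
    IN_TRANSIT = 3
    INSTALLED = 4


@dataclass
class Unit:
    element_type: str            # e.g. "AA2000"
    serial: int
    crew: str
    history: list = field(default_factory=list)

    def scan(self, status: Status) -> None:
        """Record a scan event, such as a phone reading the unit's tag on site."""
        self.history.append((datetime.now(), status))

    @property
    def status(self) -> Status:
        return self.history[-1][1] if self.history else Status.FABRICATED


def progress(units):
    """Share of units installed: the 'overall progress' view described above."""
    installed = sum(1 for u in units if u.status is Status.INSTALLED)
    return installed / len(units)


# Usage: 40 identical AA2000 panels assigned to one crew; ten get installed.
panels = [Unit("AA2000", n, crew="Crew 3") for n in range(40)]
for panel in panels[:10]:
    panel.scan(Status.INSTALLED)
print(f"{progress(panels):.0%} installed")   # -> 25% installed
```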

The interface has come a long way since its inception a decade ago. Initially a stitching together of off-the-shelf software commonly used by architects and contractors (Autodesk, Navisworks, FileMaker), it has since been rewritten in-house by SHoP Architects, allowing for more seamless and scalable linking of 3-D models to live data and better visualization. Why is this significant? SHoP can now take a holistic, portfolio-wide approach to tracking projects from their earliest phases. In the next year, SHoP Architects hopes to implement its mobile interface across all of its projects.


A simple 3-D imaging platform could change the way you see architectural plans

The process of drawing up architectural plans is in flux. For more than a decade, assorted software programs have attempted to bridge the gap between two and three dimensions by taking flat drawings or field data collected onsite and migrating them into modeling platforms, which create photorealistic renderings or 3-D virtual reality walkthroughs of a project. But what if the process began with what exists in three-dimensional time and space instead? That’s a question 3-D media company Matterport has answered by creating what it calls True3D™ imaging of real-world environments using its proprietary platform, which includes its Pro2 3D camera and cloud-based services. “What’s really happening in the industry is this shift and transition that’s been happening for a number of years from 2D to 3D,” noted Matterport’s Director of AEC, John Chwalibog. “That’s where everything has been going and continues to go with some of the augmented reality and virtual reality type technologies. It’s kind of taken 3D to another level of ‘actual 3D,’ or what we at Matterport call ‘True3D,’” he said. “So, instead of having rendered models of the design process, especially with existing spaces, why not start with the actual model of the space and use that as the backdrop to really drive and start that design process?”

How It Works

What makes the Matterport platform unique is its simplicity and efficiency. Scanning a room in 3-D previously required trained technicians and complicated software that took countless hours to capture and process. Matterport has simplified the experience by creating its Capture app for iPad, which allows users to operate its 3-D camera with the push of a single button and takes just 20 to 30 seconds per scan (total time to document an entire building depends on its size). Once the space has been captured, users can upload the data to Matterport, which processes and hosts the files for sharing within a matter of hours, Chwalibog says. The Pro2 camera generates high-quality 4K 2-D photography in addition to state-of-the-art 3-D and VR walkthroughs and floorplans, all from one device. The software and cloud services work together to capture and automatically weave together thousands of digital 3-D images into an accurate, immersive, photorealistic model that can be shared, annotated, and exported to a variety of tools such as Autodesk ReCap or Revit, improving efficiency and collaboration on the front end of the project.

Additionally, Matterport adds value to the entire building lifecycle, including construction documentation and maintenance. Chwalibog notes that on the job site, project team members use the technology to record milestones, such as when structural steel, foundation work, rough walls, or mechanical, electrical, and plumbing systems are in place. “What you end up with is a series of three-dimensional models of particular moments in time during that construction process, so it meets the needs of construction documentation to actually document what was built,” he said. “I can identify any changes from what was actually built versus the design intent, so now I can proactively make some decisions about how that’s going to impact the next milestone.” Likewise, Chwalibog says the three-dimensional record of the project gives property owners much better intelligence about existing conditions, which can prove invaluable years later if or when a renovation takes place.

Explore parts of Sir John Soane’s Museum from the comfort of your computer

“Welcome to Explore Soane. The historic house, museum, and library of 19th-century architect Sir John Soane—now made digital. Get closer than ever before to its fascinating objects and see its eclectic rooms in a new light.” These words welcome viewers as they enter the new digital model of Sir John Soane’s Museum, recently launched by ScanLAB Projects. Sir John Soane was a noted 19th-century British architect who passed away in 1837, leaving behind not simply a home, but a museum of architectural curiosities for posterity. Established by a Private Act of Parliament in 1833, the house-museum has been kept just as Soane left it at the time of his death, continuing to offer free access to visitors as he had intended. Safeguarded by its Trustees, the museum hosts exhibitions, events, and a research library.

The museum's digital model offers visitors the choice to begin their journey in the Model Room or the Sepulchral Chamber. The Model Room includes models of historical architectural sites such as the Temple of Vesta (one made from cork and one from plaster) and a model of Pompeii showing the city in 1820. The replica of the room features individual, digitized models available for download. The interactive elements of the room also include fact sheets for models in Soane’s collection, which can be found by clicking on each model. As viewers move on to the Sepulchral Chamber, they can find interactive models of the ancient Egyptian sarcophagus of King Seti I, along with a sarcophagus detail. This portion of the journey also provides fact sheets and an about page for items in the chamber.

ScanLAB Projects is a creative studio that combines 3-D technologies and large-scale scanning with the architectural and creative industries, creating digital replicas of buildings, landscapes, objects, and events. It offers 3-D printing, 3-D scanning, and visualization services to digitize the world in captivating ways. ScanLAB Projects also plans to add more rooms and works of art to the model.

A giant 3D printer will replicate an ancient temple destroyed by ISIS

Replicas of the entrance arch of the ancient Temple of Bel in Palmyra, Syria, will be created with a giant 3D printer for World Heritage Week in London and New York. The recreations are intended to defy the actions of the extremist group the Islamic State in Iraq and Syria (ISIS), which destroyed a large portion of the nearly 2,000-year-old temple building in August of last year. The arch, which is nearly 50 feet high, is one of the few relics left standing after ISIS sought to systematically destroy Palmyra in an effort to erase the pre-Islamic history of the Middle East. https://twitter.com/middleeasthist/status/683281394202742784 Before the conflict in Syria ignited in 2011, Palmyra’s rich cultural heritage drew more than 150,000 tourists each year. The temple, which was founded in A.D. 32 and consecrated to the Mesopotamian god Bel, was exemplary of the fusion of Middle Eastern, Greek, and Roman influences and was considered one of the most important sites in Palmyra. The temple was converted into a Christian church during the Byzantine era, and then into a mosque when Islam arrived around the 7th century. In recent times, the Temple of Bel was an important cultural venue for Syrians, acting as a setting for concerts and events. The Institute for Digital Archaeology (IDA), a joint venture between Harvard University, the University of Oxford, and Dubai’s Museum of the Future that promotes the use of digital imaging and 3D printing in archaeology and conservation, is taking the lead on the recreation efforts. Last year, the organization collaborated with UNESCO to distribute 3D cameras so that volunteer photographers could document threatened cultural objects in areas of conflict in the Middle East and North Africa. The images are to be uploaded to a “million-image database” for use in research, educational programs, and ultimately 3D replication, as in the case of the Temple of Bel. Although the Temple of Bel was demolished before photographers with 3D cameras could capture it, researchers at the IDA have been able to create 3D approximations of the temple using ordinary photographs. The full-size replica arches, to be made from stone powder and a lightweight composite, will be created off-site and then assembled in Trafalgar Square and Times Square for display this April.

This Seattle architect built a basement man cave housing 250,000 neatly arranged LEGO bricks

One Seattle architect’s much-ballyhooed basement isn’t built from LEGO bricks, but it houses 250,000 of them in 150 meticulously sorted bins. Jeff Pelletier, who runs a small architecture practice, Board & Vellum, has amassed a collection worth an estimated $25,000, with containers categorized by color, food, LEGO leaves, heads, torsos, LEGO latticework, satellite dishes, legs, gold bricks, red bricks, and lime. When Pelletier bought the unfurnished house in 2006, he found a lone red LEGO brick in the attic and construed it as a sign that it was the place to put down roots–and his LEGO man cave. Like many aficionados of the interlocking plastic bricks, Pelletier has been collecting since toddlerhood. At age 16, he relegated his bricks to the storage room, unearthing them again in 2005 when he resumed collecting and built up the collection he has today. When he remodeled his whimsical-looking lime-and-raspberry home in 2011, he decided to transform his basement into a media room, bar, and giant LEGO repository, where he has built a LEGO library, ships, bars, houses he’s lived in, and even a miniature version of his brightly colored home. “Since I was 2 years old, I always wanted to be an architect. I think a lot of that was because of LEGO,” Pelletier told KOMO News.

Justin Diles Breaks the Mold for TEX-FAB

Competition winner uses composite materials to re-imagine Semper's primitive hut.

The title of TEX-FAB's fourth annual competition—Plasticity—has a double meaning. It refers first to the concept at the core of the competition brief: the capacity of parametric design and digital fabrication to manifest new formal possibilities. But it also alludes to the material itself, fiber-reinforced polymer (FRP). “Plastics have the potential to push contemporary architecture beyond the frame-plus-cladding formula dominant since at least the 19th century,” said competition winner Justin Diles. Pointing to traditional stonecutting and vault work, he said, "I'm very interested in this large volumetric mode of construction, but I'm not at all interested in the stone. I think that composites probably offer the best way of addressing this old yet new mode of constructing architecture." Diles' proposal, Plastic Stereotomy, builds on his work as a KSA fellow at The Ohio State University. But where his earlier Eigenforms were two-dimensional freestanding walls, Diles' Plastic Stereotomy pavilion—which he will build at scale during the coming months—is fully three-dimensional. Inspired by teaching tools designed by Robert le Ricolais, Diles used a finite element analysis 3D modeling plugin to simulate surface buckling by superimposing volumes onto one another. "Those pieces are voluptuous; they create a lot of poché [thickness] as they overlap with one another," Diles observed. While the plugin developed by his friend was critical to the design process, Diles remained focused throughout on the end goal of fabrication. "What I'm really looking at is how we can use simulation to think about issues of construction rather than just optimization," he said. Custom fabrication shop Kreysler & Associates will provide technical support as Diles moves from design to construction. Diles cites the fire-resistant FRP cladding developed by Kreysler for Snøhetta's SFMOMA as an example of how composite materials can ease the transition from two-dimensional to volumetric design. "Even though the project still adheres to Gottfried Semper's model of a lightweight frame and cladding, the panels don't have a frame expression," he said. "They're massive, with ripples and indentations. They point to a new way of thinking about architectural surface and enclosure."
  • Fabricator Justin Diles
  • Designers Justin Diles
  • Location Los Angeles, CA and Houston, TX
  • Date of Completion 2014 (prototype), 2015 (full-scale pavilion)
  • Material FRP, paint, glue, bolts, solid foam blocks
  • Process 3D modeling, FEM, CNC milling, molding, painting, gluing, bolting
Kreysler and Diles will work together to streamline the techniques he used to build his competition prototype, a scaled-down section of the Plastic Stereotomy pavilion. (Bollinger + Grohmann will provide additional structural and material engineering support.) For the mockup, Diles used a 5-axis CNC mill to shape EPS foam molds onto which he layered up FRP cloth. He then removed the pieces from the molds, painted them, and glued and bolted them together, adding stiffeners to the open-backed components. Because the FRP is so light, he used two solid foam blocks to weigh down the structure. "I'm interested in working with Kreysler around thinking through production to make it more efficient," said Diles. For the fabricators, the TEX-FAB collaboration represents another step in Kreysler's journey from boat-building to other applications of composite materials, including architecture. "We're excited to work on this with Justin," said Kreysler's Josh Zabel. "It's exciting to see designers put fresh eyes on these materials we're devoted to." Plastic Stereotomy will be on display at TEX-FAB 2015 Houston at the University of Houston College of Architecture, March 26-29. The conference will feature workshops, lectures, and an exhibition on the theme of Plasticity.

Review> The New Normal: Penn Symposium Explores Generative Digital Design

[Editor's Note: The following review was authored by Gideon Fink Shapiro and Phillip M. Crosby.] A generation’s worth of experimentation with generative digital design techniques has seemingly created a “new normal” for architecture. But what exactly are the parameters of this “normal” condition? On November 14th and 15th, Winka Dubbeldam, principal of Archi-Tectonics and the new Chair of the Department of Architecture at the University of Pennsylvania, called together some of contemporary architecture’s most prominent proponents of generative digital design techniques for a symposium, The New Normal, examining how these techniques have transformed the field over the past twenty years. According to Ms. Dubbeldam and her colleagues in Penn’s post-professional program who organized the symposium, digital tools have “fundamentally altered the way in which we conceptualize, design, and fabricate architecture.” Participants were asked not only to reflect upon the recent past, but also to speculate on future possibilities. Even among this select group of practitioners, the shared enthusiasm for digital techniques does not imply an affinity of beliefs or approaches. While Patrik Schumacher (who, notably, lectured at Penn one week later) would have us believe that parametric techniques will triumphantly lead to a New International Style, what the New Normal symposium revealed was not a singular orthodoxy, but rather a rich multiplicity of approaches. On the one hand, one perceives a renewed sense of craftsmanship in which computation and robot-assisted fabrication can "extend the potential of what the hand can do," in the words of Gaston Nogues of Ball-Nogues Studio. On the other hand, ever-increasing computational and 3D-modeling power have nourished a whole field of virtual "screen architecture" that follows in the tradition of conceptual and utopian proposals. In his opening keynote address, Neil Denari discussed several contemporary artists—from Gerhard Richter to Tauba Auerbach—who use or misuse tools to elicit unexpected results. Similarly for architects, the computer should be seen as a filter or intermediary tool between author and work, rather than a seamless executor of authorial will. More pointedly, Roland Snooks of Kokkugia asked, "What are the behavioral biases of digital design tools?" He then suggested that contemporary architects might need to invent and design their own tools (software plug-ins and algorithms) in parallel with the architecture. Simon Kim of IK Studio went so far as to attribute to machines an agency once reserved for humans. And Francois Roche of New-Territories Architects said, "We have to torture the machine" to stretch its conventional functions, teasing out new "erotic bodies" and "ways to tell a story" through playful cunning. Lou Reed, David Bowie, and Jimi Hendrix were all invoked, but not by the speaker who wore sunglasses during his talk—Jason Payne of Hirsuta. Citing previously published remarks by Jeffrey Kipnis and Greg Lynn, Payne urged architects to test the assumed limits of their digital instruments, just as Hendrix pushed the limits of his guitar by playing it upside-down and incorporating electronic feedback in his radical performance of the “Star-Spangled Banner” at Woodstock in 1969. However, Payne cautioned, as he cued a slide of Eddie Van Halen, the pursuit of technical virtuosity alone can lead to manneristic excess.
Indeed, what made Hendrix's Woodstock performance great was not only his innovative guitar work but also his subversive and liberating rendition of the national anthem at a time of social upheaval, sharpened by his insider-outsider status as an African-American rock star. The point is, instrumentation cannot necessarily be isolated from the substance of a work and the social conditions in which it is produced. Tobias Klein gave voice to the digital zeitgeist in declaring, "We [human beings] are soft, malleable data sets." Yet if everything is now data, including bodies and buildings, how and to whose advantage is that data analyzed and applied? Selection criteria are inevitably human constructs that may take the form of artistic judgment, energy metrics, economic models, or political values. Ben van Berkel of UNStudio hinted at the conundrum of data analysis in his concluding keynote, in which he listed "different scales at which information comes together"—namely the diagram, the design model, and the prototype. But alas, the Dutch architect, an acknowledged master of the diagram, did not elaborate on how, exactly, his office wrangles messy information into a clear design mandate. One notable absence from the slate of participants in the symposium was a critic or historian to situate the New Normal within both the history of architectural practice and the wider milieu of contemporary culture. While one of the most prominent theorists of generative design, Manuel De Landa, made important contributions to the discussions, his comments focused not on situating the discourse, but instead on the artistic repurposing of non-linear, morphogenetic tools developed by scientists to create more personalized digital form-finding devices. Also lacking were the voices of women, who numbered only three out of twenty speakers and moderators, including Ms. Dubbeldam. What the relentless experimentation among the symposium’s participants suggests is that, while there may be a new normal for the practice of architecture, it has yet to become normative—and that is a sign of its vitality.