In one of the oldest neighborhoods in Cleveland, a group of architects, designers, and software developers is imagining the future of citizen-led urban development. Collective Reality: Image without Ownership took over an empty ground-floor retail space in Slavic Village earlier this month, featuring a low-fi installation of bright red foam, matte-black steel frames, and an invisible, virtual overlay of crowdsourced urban objects. The installation, as explained by its creators, was meant to “allow citizens to engage in conversations about urban development by creating images of possible neighborhood futures.” The team behind the piece, Laida Aguirre (stock-a-studio), McLain Clutter and Cyrus Peñarroyo (EXTENTS), and Mark Lindquist, hailing from the University of Michigan Taubman College of Architecture + Urban Planning and the School for Environment and Sustainability, collaborated directly with the Slavic Village Development nonprofit and LANDstudio to create a space described as a “laboratory for the development of the Collective Reality software.” The software, programmed by two other University of Michigan researchers, Frank Deaton and Oliver Popadich, is an augmented reality application that filled the exhibition space with a growing collection of virtual objects, spaces, and, its creators hope, prospects of a newly imagined city. Slavic Village, located near Cleveland’s industrial valley, has experienced a difficult decade of stagnant development after a majority of its properties were foreclosed on during the 2007 financial crisis. While the housing bubble’s burst may seem like the primary culprit for its decrepit state, the neighborhood fits the textbook definition of urban decline: the rapid disappearance of manufacturing, a declining population, the loss of urban amenities, a high proportion of low-quality housing, poverty, and crime.
Perhaps the most relevant ingredient in this cocktail of urban depression is the lack of outside investment; only a few courageous individuals have decided to stake a claim in the future of this important area. It is this last ingredient that Collective Reality attempts to confront. Conventional urban development depends on capital to both create and envisage change; growth depends on how well an idea can be imaged, presented, and sold, typically consuming vast amounts of resources during its approval processes. Slick renderings require advanced computing and educated skill sets. Maps and other forms of urban planning communication are criticized for their exclusivity to the disciplines that produced them. Community board meetings, one potential space for citizen engagement, often take place in difficult-to-reach places or at times individuals cannot afford to attend. These structures of urban development privilege wealth over local, embedded knowledge, especially in places like Slavic Village, where the socioeconomic divide is drastic. The team of Michigan-based researchers questions this status quo, asking whether technology—specifically augmented reality—can offer opportunities to separate imagination from monetary means. The installation's interactive process empowers citizens to bridge this planning gap through devices familiar to the everyday urban user. Upon entering the space, visitors were presented with a prompt—a request to capture several photographs of favorite spaces, places, and objects around the neighborhood with no more than a camera phone. Photographs were sent to the researchers, photogrammetrically transformed into three-dimensional objects, and then placed within the virtual environment of the gallery space. Visitors were encouraged to use one of the provided tablets to interact with, manipulate, and explore the collective imagination embedded within the augmented reality application.
The physical installation, while seemingly in competition with its virtual counterpart, offered material targets for the application to recognize and attach to. In reality, the exhibition was no more than a funhouse of soft foam blocks to play with and climb on, at least in the minds of the children who visited. While the creators and their beta-stage augmented reality software ask important questions about citizen engagement, bottom-up planning, and collective empowerment in an age of increasingly accessible technology, the physical nature of the gallery permitted its users to actually act out their collective imagination. The bare, unadorned geometries of the red foam and steel frames were reminiscent of the simple playgrounds designed by Aldo van Eyck in postwar Amsterdam. It was the playground, he argued, that literally gives space to the imagination. This unintentional consequence of Collective Reality points out an important aspect of community development: the spaces and architectures that promote social interactivity are vitally important to the creative imagining of possible futures. Collective Reality: Image without Ownership ended on October 19, 2019. The gallery is located at 5322 Fleet Avenue, Cleveland, OH 44105.
Posts tagged with "Augmented Reality":
Video game software suites like Unreal Engine and Unity have made their way into the architectural arsenal, with AEC firms like Skanska, Foster + Partners, and Zaha Hadid Architects using them to visualize and test new buildings. However, these tools weren’t necessarily built with AEC professionals in mind, and while they often result in nice-looking environments, they don’t generally offer much in the way of architecture-specific functionality like that which architectural designers have come to rely upon in BIM and CAD software. To help bridge this gap, the company behind Unity is testing a new piece of software called Reflect. “Unity Pro is a super powerful tool that people use for creating design walkthroughs and custom application development,” said Tim McDonough, vice president at Unity, “but these firms have a whole bunch of people that would like to be able to view their Revit data easily in a 3D engine like Unity without having to be a software developer, which is what our current tools are built for.” Reflect, which will launch publicly this fall, connects with existing software suites like Revit and Trimble to leverage the vast amounts of data that designers and contractors rely upon, using it to create new visualizations, simulations, and AR and VR experiences. Users can view and collaborate across BIM software and Reflect, which are synchronized in real time across multiple desktop and mobile devices. “Users were saying it took them weeks to get data out of Revit into Unity, and by the time they got it out, the project had moved on and what was done was irrelevant,” said McDonough. “We’ve taken out the drudgery so that now what used to take weeks takes just minutes.” https://youtu.be/YnwcGfr0Uk0 A number of firms have already been putting Reflect to the test. Reflect is open source and allows users to develop their own applications, whether for use in their firm or for a broader architectural public.
SHoP Architects has been trying out Reflect since the software entered its alpha phase this summer, creating various solutions to test on its supertall project at 9 DeKalb Avenue in Brooklyn. Adam Chernick, an associate at SHoP focusing on AR and VR research, noted that while showing off buildings in software like Unity has become part of standard practice, getting those visualizations attached to critical information has been a challenge up until now. “It hasn't been super difficult to get the geometry into the game engines," he said, "but what has been even more difficult is getting that data into the game engines." One of the first uses for Reflect that the SHoP team devised was an AR application that allowed them to monitor the progress of 9 DeKalb and easily oversee construction sequencing using color-coded panels that map onto the building’s model in their office. Chernick explained that there was a huge number of exterior window panels to keep track of and that the app really helped. “We wanted to be able to visualize where we are in the construction process from anywhere—whether in VR or AR—and be able to get a live update of its status,” he said. “Now we can watch the building being constructed in real time.” The SHoP team has also leveraged the power of Reflect—and its integration with Unity—to create new visualization tools for acoustic modeling. “We created an immersive acoustic simulator where you get to see how a sound wave expands through space, reflects off of walls, and interacts with geometry,” said Christopher Morse, an associate of interactive visualization at SHoP. “You can slow it down, you can pause it, and you can stop it.” The idea, he explained, is to help architects make acoustic decisions earlier in the design process. “Currently a lot of those acoustic decisions come later and most of the geometry is already decided,” Morse said, noting that at a certain point, all designers can really do is add carpeting or acoustic tiling.
“But we want to use these tools earlier, and in order for that to actually work, we needed to enable an iterative feedback loop so that you can create a design, analyze and evaluate it, and then make changes based on your analysis." With Reflect, there's also no more grueling import and export process, which Morse said has prevented designers from even incorporating such tools in their workflows. “Once we had Reflect, we integrated it into our existing acoustic visualization software in order to make that round trip quicker so that people can put on the headset, make a change in Revit, and instantly reevaluate based on those changes.” There is also metadata attached to the geometry, such as material information. While 9 DeKalb is too far along in its construction to incorporate the new software heavily into the design, SHoP has begun testing its acoustic modeling app in the lobby of the project. https://youtu.be/f0IA55N_99o Reflect could also provide BIM data in a more user-friendly package to more people working on building projects. “We think that BIM is so valuable, but not enough people get to use it,” said McDonough. “We were trying to figure out how to get BIM in the hands of people on a construction site, so everyone can see all that information at a human scale.” At SHoP, this means creating apps that contractors can use on the job. Currently, its AR apps work on mobile devices, but SHoP hopes that, as AR headsets become more mainstream, it will also be able to use the apps on products such as the HoloLens. “This could be a paradigm shift,” said Chernick. “We realize that this massive, thousand-sheet set of construction documents that we need to create in order to get a building built is not going anywhere soon. But what we can do is help make this process more efficient and help our construction teams understand and potentially build these projects in more efficient ways.”
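The wave behavior Morse describes, sound expanding through space and reflecting off walls, rests on the same specular-reflection rule that simple ray-based acoustic models use: a ray's direction is mirrored about a surface's normal. A minimal sketch of that rule (hypothetical code, not SHoP's or Unity's actual implementation):

```python
import numpy as np

def reflect(direction, normal):
    """Mirror a ray direction about a wall's surface normal using the
    specular-reflection rule r = d - 2(d.n)n, as in simple ray-based
    acoustic models. A sketch, not production acoustics code."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)  # ensure the normal is unit length
    return d - 2.0 * np.dot(d, n) * n

# A ray heading down-and-right bounces off a horizontal floor (normal +y):
bounced = reflect([1.0, -1.0], [0.0, 1.0])  # -> [1.0, 1.0]
```

Iterating this rule over a room's wall geometry is what lets a simulator trace how a wavefront arrives at a listener after one or more bounces, which is the kind of feedback loop the SHoP team wanted early in design.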
Morpholio, the architect-turned-developer-run company known for its Trace app, which blends augmented reality, digital hand drafting, and other architectural tools on portable devices, has brought its interior design program, Board, to desktops for the first time. Coming on the heels of the new macOS Catalina update, the desktop version of Board leverages the new Mac Catalyst developer tool, which simplifies translating iOS apps to the desktop. Board, which applies a mood-board logic to technical interior design problems, has been designed not only for professionals but also to make home design easier for average consumers. That said, with Board for Mac, Morpholio hopes to “take advantage of the unique properties of the desktop environment," says Morpholio co-founder Mark Collins in a press release from the company, “which is essential for professional work.” The desktop app will include mood board “super tools,” such as layer control and magic-wand selection and deletion, as well as a feature called “Ava,” which creates spec sheets for clients and contractors. Ava gives automatic suggestions to match colors and forms, and offers libraries of products from larger companies like Herman Miller and Knoll and smaller designers like Eskayel. It will also include new export features and provide further compatibility with Adobe and Autodesk products (as well as Pinterest). In addition, while Board for mobile already has AR features that allow furniture to be placed in space at scale, the desktop version will allow for VR integration. “A typical furniture catalog would rely on still images,” says Morpholio co-founder Anna Kenoff, “but Board allows you to experience expertly rendered models, created by the storytellers at Theia Interactive. You can view and spin around your favorite furniture pieces and experience them in every dimension.
You can zoom in to stitching and materiality and feel the shade and shadows on their forms.” Additional viewing and presentation features will be built in as well, and Board will take full advantage of Catalina’s updated Dark Mode for those who prefer it. “When Apple released Mac Catalyst, they definitely had creative professionals in mind,” says Kenoff of the recent Apple release. “They wanted to amplify the power of mobile apps by combining them with the precision capable on a Mac. Few architects and designers work exclusively on a laptop, desktop, or tablet. We hope to make our apps available wherever designers are working.”
In Tallinn, Estonia, a knotted wooden structure that combines new and old technology has won the Huts and Habitats award at the Tallinn Architecture Biennale. Curated by Yael Reisner under the theme “Beauty Matters,” the biennale seeks to celebrate beauty in opposition to architectural environs that can often be isolating, alienating, and ecologically unsound. Steampunk, as the installation is called, is designed to show off the latest in tech while retaining a human touch. It was designed by Soomeen Hahm and Igor Pantic, who both teach at the Bartlett, along with Cameron Newnham and Gwyllim Jahn of software company Fologram, and constructed with the engineers at Format and the Estonian timber specialists Thermory. Standing 13 feet tall, the pavilion is made of thermally modified, steam-bent ash wood, with hand-crafted elements sitting side by side with parts that have been CNC-milled and 3D-printed, blurring the boundaries between the analog and the digital in process and production. Steampunk was also designed in part using mixed-reality tech, further complicating this “human-machine collaboration,” as biennale juror Areti Markopoulou put it. “The structure challenges the idea of the primitive hut—showing how, by using algorithmic logic, simple raw materials can be turned into a highly complex and inhabitable structure,” said Gilles Retsin, TAB 2019’s Installation Program Curator, in a release from the biennale. “[Steampunk] consists of a bespoke merging of craft, immersive technologies, and material performance, for the production of dynamic organic forms that surpass building limitations of local precision or of the pure automate,” explained Markopoulou, head of the jury for the installation program, in a press release.
The pavilion is the latest in a long line of high-tech timber installations, as architects, researchers, and educators all try their hand at pushing the boundaries of what timber can do; take Cornell University’s Robotic Construction Laboratory's LOG KNOT, for example. Steampunk will be on view until 2021.
Retail is dead. Long live retail. With the ubiquity of online shopping, brick-and-mortar retail has become more competitive. Good deals and low prices aren't enough to draw customers into stores anymore; today's customers are looking for experiences, according to developers and retail prognosticators. Canadian outdoor goods retailer Mountain Equipment Co-op (MEC) has teamed up with creative technology company Finger Food to offer an in-store—or in-home—experience that bridges the digital and the physical: augmented reality tent shopping. "Retail has gone through significant disruption and it's only going to get faster," said David Labistour, CEO of MEC. The outdoor company sees this disruption as a unique opportunity for growth. MEC offers more tents than can fit in its stores. Rather than hanging excess tents from the ceiling, MEC asked Finger Food to develop an application that allows customers using a phone, tablet, or AR/VR goggles to see and explore a full-scale, fully rendered (inside and out) 3D version of every single tent that MEC sells. What's special about this particular use of the increasingly common AR technology is the unprecedented level of detail Finger Food was able to achieve. Finger Food creates its ultra-realistic 3D models in an enormous room it calls the holodeck—named after the high-tech virtual reality rooms in Star Trek. Using a proprietary photogrammetry rig and accompanying software, the company can take thousands of photos of any object to capture its geometry and textures at extremely high resolution. In addition to the realism, Finger Food's solution is distinguished by its speed—scanning an object requires less than an hour, compared to the days that could be spent creating a 3D model from scratch—and the system has proven its capability to capture objects of any scale, from a pair of sunglasses to a semi-truck. The work for MEC isn't Finger Food's first foray into the retail space.
The group has previously worked with Lowe's home improvement stores to develop two augmented reality apps. One lets users see what products look like in their homes—everything from accent tile to a six-burner stove—and easily make a purchase afterward. The other app guides users through Lowe's 100,000-square-foot stores to find the exact products they're looking for; it also notifies employees when an item needs restocking. Customers can currently use the AR application at MEC's flagship Toronto store, with a larger rollout planned. "We believe the future of the customer experience will be significantly changed through the integration of technology," said Labistour. If these technologies prove successful, the retail experience and store design could change as well. In a future with augmented reality and next-day delivery, stores may need less space, as fewer items would be kept on display and in stock.
New York's New Museum, which has already launched its fair share of tech-forward initiatives, like the net-art preservation and theorization platform Rhizome and NEW INC, has teamed up with Apple over the past year and a half to create a new augmented reality (AR) program called [AR]T. New Museum director Lisa Phillips and artistic director Massimiliano Gioni selected artists Nick Cave, Nathalie Djurberg and Hans Berg, Cao Fei, John Giorno, Carsten Höller, and Pipilotti Rist to create new installations that display the artistic potential of AR and help advance the museum’s own mixed-reality strategy. Each of the artists will create interactive AR artworks that can be viewed via iPhone with the [AR]T app on “choreographed” street tours beginning at a limited number of Apple stores across six cities. Users will be able to capture the mixed-reality installations in photos and video through their phones. Additionally, Nick Cave has created an AR installation titled Amass that can be viewed in any Apple store, and the company has worked with artist and educator Sarah Rothberg to help develop programs that initiate beginners into developing their own AR experiences. This announcement comes on the heels of much industry AR and VR speculation regarding Apple, encouraged in part by recent hires from the gaming industry, like that of Xbox co-creator Nat Brown, previously a VR engineer at Valve. While some artists, institutions, and architects have embraced AR and VR, many remain skeptical of the technology, and not just on artistic grounds. Writing in the Observer, journalist Helen Holmes wonders if “Apple wants the public to engage with their augmented reality lab because they want to learn as much about their consumers as possible, including and especially how we express ourselves creatively when given new tools.” The [AR]T app will drop on August 10 in the following cities: New York, San Francisco, London, Paris, Hong Kong, and Tokyo.
Nonument is committed to not only recording but celebrating the 20th century’s most important non-monuments. Founded in 2011, the multidisciplinary artist and research collective has amassed a record of built spaces that still stand, if barely: structures forgotten through decay and technological or political change. Nonument preserves them even as they fall out of favor in a changing 21st-century society. Rather than present “a glorified collection of obscurities” or focus purely on architectural styles, founders Neja Tomšič and Martin Bricelj Baraga seek to develop a deeper understanding of public space and art, and of how politics shape these spaces in our world today. In partnership with the Mapping & Archiving Public Spaces (MAPS) project, the collective aims to catalog more than 120 forgotten sites around the globe and bring them back into the public eye. Created by the Museum of Transitory Art, MAPS shares many of the goals of Nonument: its mission “aims to identify, map and archive public spaces, architecture, and monuments which are part of our cultural heritage, but are not yet identified as such.” And that’s where Nonument began. NONUMENT01 was a response to the demolition of a Brutalist icon, the McKeldin Fountain in Baltimore. The decision to demolish it was made with limited public engagement or input, though the fountain had been an important gathering point for protestors and creatives, and the visual centerpiece of McKeldin Square. Upon its removal in 2016, Lisa Moren, a professor of visual arts, enacted the first art installation of Nonument, debuting an augmented reality app that allowed users to recreate the fountain on their screens and interact with memories like protest signs and koi fish to discover their stories. The app and its launch event at the site continued the legacy of the lost monument and its role within the city, setting a precedent for Nonuments of the future. The database is just one component of Nonument.
Case studies on architectural theory, along with live art and performance events like Moren’s, are also an integral part of the collective’s mission, making it more than just an encyclopedia of degrading buildings. While the act of listing the monuments breathes back a certain degree of life, critical discourse and real-life opportunities for interaction with the listed structures complete a circle of study and renegotiation with the space they occupy—aligning with the overarching goals of the group. From nuclear power plants in Austria to stone sculptures in Serbia, the database is set to become a comprehensive collection and research resource for the 20th century, to continue unearthing the stories that matter, and to rewrite the rules for the sustainable management of our cultural heritage.
The phrase “bring a project to life” is thrown around casually by creative types of all creeds, from industrial designers to conceptual painters—people whose daily lives involve intense engagement with communication tools that allow the ideas in their heads to exist in the physical world. Emerging technologies from 3D software to VR goggles have revolutionized the way that clients can experience a designer’s vision, and now Hyperform, a new collaborative and data-driven design tool, allows the design industry to literally immerse itself—digitally—within a working project, blown up via augmented reality technology to 1:1 scale. Hyperform comes from a Bjarke Ingels Group (BIG) collaboration with Squint/Opera, a creative digital studio, and UNStudio. These big-name studios believe that their immersive software will enable designers to make the best decisions for the project and the client much faster, as the interactive elements come closer to complete project visualization than anything we’ve seen yet. Jan Bunge, managing director at Squint/Opera, said, “Hyperform marks the first time we can feel and sense a spatial condition before it gets built.” Client and designer can walk around a project, experiencing its massing, spatial qualities, and materiality, and simply use hand gestures to edit, delete, and alter the digital file in real time, before it’s too late or too expensive to make a change. In a concept film, the Hyperform user is depicted as a disembodied hand, the viewer’s own, pushing at virtual buttons suspended in space and scrolling through horizontal libraries of architectural drawings, 3D models, and plans.
Selecting a model and blowing it up with verbal cues to immersive size, the user shares it with a life-size colleague who materializes in pixelated form before our eyes, calling in and “ready to join the meeting.” BIG debuted the new tool at its curated exhibition, FORMGIVING – An Architectural Future History from Big Bang to Singularity, at the Danish Architecture Center in Copenhagen. Amid the exhibition of 71 BIG projects currently on the drawing boards, representing the firm’s active proposals for the future, Hyperform sits toward the end of the exhibition's “timeline,” at the top of the staircase near “singularity,” as the software represents the step beyond perceiving mere reality and into creating new ones—digital realities.
Artist Simon Denny is digging into data as a landscape, unearthing the possibilities of extracting material both physical and informational in Mine, a show at the Australian museum Mona. The show has found a fitting setting at Tasmania’s iconoclastic museum, the privately run brainchild of entrepreneur David Walsh, which is itself a winding maze of darkened corridors partially carved into the Triassic sandstone of the Berriedale peninsula. The mine-shaft feeling is only heightened by the museum's new Nonda Katsalidis and Falk Peuser–designed underground extension—a level of subterranean spaces connected by circular stone tunnels with metal ribs that the museum calls Siloam. Denny, whose previous work has fixated on cryptocurrency, the dissolution of borders, and other complications of our increasingly computerized world, works in the space between the two meanings of mine: both the extraction of physical material, like the rare earth metals and lithium necessary for our devices, and the data mining and bitcoin mining whose environmental impact, in the form of outsize carbon emissions and land use, is increasingly clear. Mine looks at technological shifts and their impact on the IRL environment, as well as the entanglements of colonization and economics that have propelled resource extraction and all its environmental impacts. Instead of a canary in a coal mine, Mine features an augmented reality version of the nearly extinct King Island brown thornbill, which researchers have recently discovered in Tasmania outside of its normal habitat, living inside a 3D version of a patent diagram of an Amazon warehouse cage that was in actuality designed for the company’s notoriously overworked and underpaid human workers. On the walls, the bird is overlaid onto pages of the patent, and the AR bird, whose habitat has been all but destroyed by industry, flits throughout the exhibition on visitors’ phones or on "The O,” the museum’s unusual electronic guide.
The exhibition has been designed as a trade show-cum-board game, where various devices that extract resources from the land and from human labor are displayed on a giant version of Squatter, a classic Monopoly-style Australian board game about raising sheep. Another board game, called "Extractor," will act as the exhibition catalogue. Figurative work from other artists who investigate work and automation will be displayed, including Li Liao’s 2012 Consumption, which recalls the artist’s own experience working for the manufacturer Foxconn, and Patricia Piccinini’s 2002 Game Boys Advanced. Curators Jarrod Rawlins and Emma Pike hope that, taken together, these works will evince a “metaphorical workforce.” Mine is on view through April 13, 2020.
For all the advances in technology over the past decade, the experience of curating and viewing museum shows has remained relatively unchanged. Even though digital archive systems exist and have certainly helped bring old institutions into the present, they have relatively little influence over the ways museum shows are designed and shared. The normal practice is more or less “old school” and even borderline “dysfunctional,” said Bika Rebek, principal of the New York and Vienna–based firm Some Place Studio. In fact, a survey she conducted early on found that many of the different software suites that museum professionals were using were major time sinks for their jobs. Fifty percent said they felt they were “wasting time” trying to fill in data or prepare presentations for design teams. To Rebek, this is very much an architectural problem, or at least a problem architects can solve. She has been working over the past two years, supported by NEW INC and the Knight Foundation, to develop Tools for Show, an interactive web-based application for designing and exploring exhibitions at various scales—from the level of a vitrine to a multi-floor museum. Leveraging her experiences as an architect, 3D graphics expert, and exhibition designer (she’s worked on major shows for the Met and Met Breuer, including the OMA-led design for the 2016 Costume Institute exhibition Manus x Machina), Rebek began developing a web-based application to enable exhibition designers and curators to collaborate, and to empower new ways of engaging with cultural material for users anywhere. Currently, institutions use many different gallery tools, she explained, which don’t necessarily interact and don’t usually let curators think spatially in a straightforward way. 
Tools for Show allows users to import all sorts of information and metadata from existing collection management software (or enter it anew), which is attached to artworks stored in a library that can then be dragged and dropped into a 3D environment at scale. Paintings and simple 3D shapes are automatically generated; for more complex forms, where an image projected onto a shape of similar footprint isn’t enough, users can create their own models. For example, to reproduce the New Museum’s 2017 show Trigger: Gender as a Tool and a Weapon, Rebek rendered the space and included many of the basic furnishings unique to the museum. For other projects, like a test case with the Louvre's sculptures, she found free-to-use models and 3D scans online. Users can drag these objects across the 3D environments and access in-depth information about them with just a click. With quick visual results and Google Docs-style automatic updates for collaboration, Tools for Show could help replace not just cumbersome content management systems, but endless emails too. Rebek sees Tools for Show as having many potential uses. It can be used to produce shows, allowing curators to collaboratively and easily design and redesign their exhibitions, and, after a show comes down, it can serve as an archive. It can also be its own presentation system—not only allowing “visitors” from across the globe to see shows they might otherwise be unable to see, but also creating new interactive exhibitions or even just vitrines, something she’s been testing out with Miami’s Vizcaya Museum and Gardens. More than just making work easier for curators and designers, Tools for Show could give a degree of curatorial power and play over to a broader audience. “[Tools for Show] could give all people the ability to curate their own show without any technical knowledge,” she explained. And, after all, you can't move around archival materials IRL, so why not on an iPad?
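The workflow Rebek describes, in which artworks carry their collection metadata into a drag-and-drop 3D scene, can be sketched as a simple data model (all names and fields here are hypothetical; Tools for Show's internals are not public):

```python
from dataclasses import dataclass, field

@dataclass
class Artwork:
    """An object imported from collection-management software;
    its metadata travels with it into the 3D scene."""
    title: str
    metadata: dict                        # e.g. artist, date, medium
    position: tuple = (0.0, 0.0, 0.0)     # meters, in gallery coordinates

@dataclass
class Exhibition:
    works: list = field(default_factory=list)

    def place(self, work, position):
        """Drag-and-drop: position a work in the gallery at scale."""
        work.position = position
        self.works.append(work)

    def inspect(self, title):
        """A click on a placed object surfaces its full metadata."""
        return next(w.metadata for w in self.works if w.title == title)

show = Exhibition()
show.place(Artwork("Untitled", {"artist": "Anonymous", "year": 2017}),
           (2.0, 0.0, 1.5))
```

The design point is that placement data and archival metadata live on the same object, which is what lets a single file serve as layout tool, presentation system, and archive after the show comes down.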
While some of the curator-focused features of Tools for Show are still in testing, institutions can already request display tools like those shown at Vizcaya. Rebek, a faculty member at Columbia University's Graduate School of Architecture, Planning, and Preservation, has also worked with students to use Tools for Show in conjunction with photogrammetry techniques to develop new display methods for otherwise inaccessible parts of the Intrepid Sea, Air & Space Museum, a naval and aerospace history museum housed in a decommissioned aircraft carrier floating in the Hudson River. At a recent critique, museum curators were invited to see the students' new proposals and explore spatial visualizations of the museum through interactive 3D models, AR, and VR, as well as in-browser and mobile tools that included all sorts of additional media and information.
R+D for the Built Environment is sponsoring a six-month, paid, off-site design fellowship program starting this summer. We're looking for four candidates in key R+D topic areas:
- Building material science
- 3D printing, robotics, AR/VR
- AI, machine learning, analytics, building intelligence
- Quality housing at a lower cost
- Building resiliency and sustainability
- Workplace optimization
- Adaptable environments
Now active in over 30 countries around the world, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at resolutions never seen before, and sharing its many-terabyte models with researchers and the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem's work, technology, and plans for the future. The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do. Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archeological sites and I was concerned by that. I wanted to find a new way to record these sites and to preserve them even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, to create a digital copy or model of each site along with partner researchers in these different countries. AN: You mentioned drones and AI; what technology are you using? YU: We have a partnership with a lab in France, INRIA (Institut National de Recherche en Informatique/National Institute for Research in Computer Science and Automation). They discovered an algorithm that could transform 2D pictures into a 3D point cloud, which is a projection of every pixel of the pictures into space. 
These points in turn reproduce the shape and the color of the environment, the buildings, and so on. Billions of points reproduce the complexity of a place in a photorealistic manner; because the points are so tiny and so numerous, you cannot see the individual points, only the shape of the building in 3D. Etienne Tellier: The generic term for the technology that converts these big datasets of pictures into 3D models is photogrammetry. YU: Which is just one process. Even still, photogrammetry was invented more than 100 years ago…Before, it was a manual process, and we were only able to reproduce a part of a wall or something like that. Big data processing has allowed us to reproduce a huge part of the real environment. It's a very new way of doing things. Just in the last two years, we've become able to make a copy of an entire city, like Mosul or Aleppo, something not possible before. We also have a platform to manage this huge amount of data, and we're working with cloud computing. In the future we want to open this platform to the public. AN: All of this technology has already grown so quickly. What do you see coming next? YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us because they're in a rebel zone, for example. Cameras also continue to get better and better. Today we can produce a model with one point per millimeter, and I think in the future we will be able to have ten points per millimeter. That will enable us to see every detail of something like small writing on a stone. ET: Another possible evolution, which we are already beginning to see thanks to artificial intelligence, is automatic recognition of what is shown by a 3D model. That's something you can already have with 2D pictures. 
There are algorithms that can analyze a 2D picture and say, "Oh okay, this is a cat. This is a car." Soon there will probably be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, "Okay, this is a Corinthian column. This dates back to the second century BC." Another technology we are working on is creating beautiful images from 3D models. There have been difficulties to overcome because our 3D models are huge. As Yves said, they are composed of billions of points, and for the moment there is no 3D software on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, one where we don't have to lower the quality of our 3D models. We can keep the native resolution and photorealism of our big 3D models and create very beautiful videos from them that can be as large as 32K and projected onto very big areas. There will be big developments in this field in the future. AN: Speaking of projections, what are your approaches to making your research accessible? Once you've preserved a site, how does it become something that people can experience, whether they're specialists or the public? YU: There are two ways to open this data to the public. The first is producing digital exhibitions, which we are currently doing for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of the architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection-mapped a huge box, 600 square meters [6,458 square feet], with 3D video. 
We were able to project monuments like the Damascus Mosque or the Palmyra sites, with visitors surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a scale of one to one, so the visitor feels like they're inside the sites. AN: So you could project one to one? ET: Yes, we can project one to one. For example, in the exhibition we participated in recently at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. Often the visitor could see the sites at a one-to-one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. They can discover not only the city as a whole but also the monuments and the architectural details. They can switch seamlessly between different scales: the macro scale of the city, the more micro one of a monument, and the very micro one of a detail. AN: What are you working on now? ET: Recently, we participated in an exhibition financed by Microsoft and held in Paris at the Musée des Plans-Reliefs, a museum that holds replicas of the most important sites in France. These 3D architectural replicas, or maquettes, can be 3 meters [approx. 10 feet] wide; they were commissioned by Louis XIV and created during the 17th century because he wanted replicas to prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality and proposed an experience in this museum focusing on the replica of Mont-Saint-Michel, the famous site in France. 
We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both sites, the replica and the real one, and used them to create holograms that were embedded and superimposed. Through headsets, visitors would see a hologram of water rising and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, and the interplay between the two. And you could also see the site as it was hundreds of years before. It was a whole new experience relying on augmented reality, and we were really happy to take part in it. The exhibition should travel to Seattle soon.
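The core idea Ubelmann describes, projecting every pixel of a picture into space as a colored 3D point, can be illustrated in a few lines of code. The sketch below is not Iconem's proprietary pipeline: it assumes depth has already been estimated (the hard part, which photogrammetry solves by matching features across many overlapping photos) and simply back-projects each pixel through a pinhole camera model; all function and parameter names are illustrative.

```python
import numpy as np

def backproject_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project each pixel of a depth image into a colored 3D point.

    depth : (H, W) array of depths along the camera axis (0 = no estimate)
    color : (H, W, 3) array of RGB values
    fx, fy, cx, cy : pinhole camera intrinsics (focal lengths, principal point)
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0  # drop pixels where depth estimation failed
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = color[valid]
    return points, colors
```

Run over millions of photos, this per-pixel projection is what yields the billions of colored points that, viewed together, read as a photorealistic surface rather than individual dots.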