The phrase “bring a project to life” is thrown around casually by creative types of all creeds, from industrial designers to conceptual painters—people whose daily lives involve intense engagement with communication tools that allow the ideas in their heads to exist in the physical world. Emerging technologies from 3D software to VR goggles have revolutionized the way clients can experience a designer’s vision, and now Hyperform, a new collaborative and data-driven design tool, allows designers to literally immerse themselves—digitally—within a working project, blown up via augmented reality to 1:1 scale. Hyperform comes from a collaboration between Bjarke Ingels Group (BIG), the creative digital studio Squint/Opera, and UNStudio. These big-name studios believe that their immersive software will enable designers to make the best decisions for the project and the client much faster, as its interactive elements come closer to complete project visualization than anything we’ve seen yet. Jan Bunge, managing director at Squint/Opera, said, “Hyperform marks the first time we can feel and sense a spatial condition before it gets built.” Client and designer can walk around a project, experiencing its massing, spatial qualities, and materiality, and use simple hand gestures to edit, delete, and alter the digital model in real time, before a change becomes too late or too expensive to make. In a concept film, the Hyperform user is depicted as a disembodied hand, the viewer’s own, pushing at virtual buttons suspended in space and scrolling through horizontal libraries of architectural drawings, 3D models, and plans.
Selecting a model and blowing it up with verbal cues to immersive size, the user shares it with a life-size colleague who materializes in pixelated form, calling in and “ready to join the meeting.” BIG debuted the new tool at its curated exhibition, FORMGIVING – An Architectural Future History from Big Bang to Singularity, at the Danish Architecture Center in Copenhagen. Amid the exhibition of 71 BIG projects currently on the drawing boards, representing the firm’s active proposals for the future, Hyperform sits toward the end of the exhibition’s “timeline”—at the top of the staircase, near “singularity”—as the software represents the step beyond perceiving mere reality, into creating new realities—digital ones.
Artist Simon Denny is digging into data as a landscape, unearthing the possibilities of extracting material both physical and informational in Mine, a show at the Australian museum Mona. The show has found itself a fitting setting at Tasmania’s iconoclastic museum, the privately run brainchild of entrepreneur David Walsh, which is itself a winding maze of darkened corridors partially carved into the Triassic sandstone of the Berriedale peninsula. The mine-shaft feeling is only heightened by the museum’s new Nonda Katsalidis and Falk Peuser–designed underground extension—a level of subterranean spaces connected by circular stone tunnels with metal ribs that they’re calling Siloam. Denny, whose previous work has fixated on cryptocurrency, the dissolution of borders, and other complications of our increasingly computerized world, works in the space between the two meanings of mine—both the extraction of physical material, like the rare earth metals and lithium necessary for our devices, and the data mining and bitcoin mining whose environmental impact, in the form of outsize carbon emissions and land use, is increasingly clear. Mine looks at technological shifts and their impact on the IRL environment, as well as the entanglements of colonization and economics that have propelled resource extraction and all its environmental consequences. Instead of a canary in a coal mine, Mine will feature an augmented reality version of the nearly extinct King Island brown thornbill, which researchers have recently discovered in Tasmania outside of its normal habitat, living inside a 3D version of a patent diagram of an Amazon warehouse cage that was actually designed for the company’s notoriously overworked and underpaid human workers. On the walls, the bird is overlaid onto pages of the patent, and the AR bird, whose habitat has been all but destroyed by industry, flits throughout the exhibition on visitors’ phones or on “The O,” the museum’s unusual electronic guide.
The exhibition has been designed as a trade show-cum-board game, where various devices that extract resources from the land and from human labor are displayed on a giant version of Squatter, a classic Monopoly-style Australian board game about raising sheep. Another board game, called "Extractor," will act as the exhibition catalogue. Figurative work from other artists who investigate work and automation will be displayed, including Li Liao’s 2012 Consumption, which recalls the artist’s own experience working for the manufacturer Foxconn, and Patricia Piccinini’s 2002 Game Boys Advanced. The curators Jarrod Rawlins and Emma Pike hope that, taken together, these sculptures will evince a “metaphorical workforce.” Mine is on view through April 13, 2020.
For all the advances in technology over the past decade, the experience of curating and viewing museum shows has remained relatively unchanged. Even though digital archive systems exist and have certainly helped bring old institutions into the present, they have relatively little influence over the ways museum shows are designed and shared. The normal practice is more or less “old school” and even borderline “dysfunctional,” said Bika Rebek, principal of the New York and Vienna–based firm Some Place Studio. In fact, a survey she conducted early on found that many of the different software suites that museum professionals were using were major time sinks for their jobs. Fifty percent said they felt they were “wasting time” trying to fill in data or prepare presentations for design teams. To Rebek, this is very much an architectural problem, or at least a problem architects can solve. She has been working over the past two years, supported by NEW INC and the Knight Foundation, to develop Tools for Show, an interactive web-based application for designing and exploring exhibitions at various scales—from the level of a vitrine to a multi-floor museum. Leveraging her experiences as an architect, 3D graphics expert, and exhibition designer (she’s worked on major shows for the Met and Met Breuer, including the OMA-led design for the 2016 Costume Institute exhibition Manus x Machina), Rebek began developing a web-based application to enable exhibition designers and curators to collaborate, and to empower new ways of engaging with cultural material for users anywhere. Currently, institutions use many different gallery tools, she explained, which don’t necessarily interact and don’t usually let curators think spatially in a straightforward way. 
Tools for Show allows users to import all sorts of information and metadata from existing collection management software (or enter it anew), which is attached to artworks stored in a library that can then be dragged and dropped into a 3D environment at scale. Paintings and simple 3D shapes are automatically generated, though for more complex forms, where an image projected onto a shape of a similar footprint isn’t enough, users can create their own models. For example, to produce the New Museum’s 2017 show Trigger: Gender as a Tool and a Weapon, Rebek rendered the space and included many of the basic furnishings unique to the museum. For other projects, like a test case with the Louvre's sculptures, she found free-to-use models and 3D scans online. Users can drag these objects across the 3D environments and access in-depth information about them with just a click. With quick visual results and Google Docs-style automatic updates for collaboration, Tools for Show could help replace not just more cumbersome content management systems, but endless emails too. Rebek sees Tools for Show as having many potential uses. It can be used to produce shows, allowing curators to collaboratively and easily design and redesign their exhibitions, and, after the show comes down, it can serve as an archive. It can also be its own presentation system—not only allowing “visitors” from across the globe to see shows they might otherwise be unable to see, but also creating new interactive exhibitions or even just vitrines, something she’s been testing out with Miami’s Vizcaya Museum and Gardens. More than just making work easier for curators and designers, Tools for Show could possibly give a degree of curatorial power and play over to a broader audience. “[Tools for Show] could give all people the ability to curate their own show without any technical knowledge,” she explained. And, after all, you can't move around archival materials IRL, so why not on an iPad?
While some of the curator-focused features of Tools for Show are in the testing phase, institutions can already request new display tools like those shown at Vizcaya. Rebek, as a faculty member at Columbia University's Graduate School of Architecture, Planning, and Preservation, has also worked with students to use Tools for Show in conjunction with photogrammetry techniques in an effort to develop new display methods for otherwise inaccessible parts of the Intrepid Sea, Air, and Space Museum, a naval and aerospace history museum located in a decommissioned aircraft carrier floating in the Hudson River. At a recent critique, museum curators were invited to see the students’ new proposals and explore the spatial visualizations of the museum through interactive 3D models, AR, and VR, as well as in-browser and mobile tools that included all sorts of additional media and information.
R+D for the Built Environment is sponsoring a six-month, paid, off-site design fellowship program starting this summer. We're looking for four candidates in key R+D topic areas:
- Building material science
- 3D printing, robotics, AR/VR
- AI, machine learning, analytics, building intelligence
- Quality housing at a lower cost
- Building resiliency and sustainability
- Workplace optimization
- Adaptable environments
Now active in over 30 countries around the world, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at resolutions never seen, and sharing their many-terabyte models with researchers and with the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem’s work, technology, and plans for the future. The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do. Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archeological sites and I was concerned by that. I wanted to find a new way to record these sites and to preserve them even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, in order to create a digital copy or model of the site along with partner researchers in these different countries. AN: You mentioned drones and AI; what technology are you using? YU: We have a partnership with a lab in France, the INRIA (Institut National de Recherche en Informatique/National Institute for Research in Computer Science and Automation). They discovered an algorithm that could transform a 2D picture into a 3D point cloud, which is a projection of every pixel of the picture into space. 
These points in the point cloud in turn reproduce the shape and the color of the environment, the building, and so on. Billions of points reproduce the complexity of a place in a photorealistic manner, but because the points are so tiny and so numerous, you cannot see the individual points—you see only the shape of the building in 3D. Etienne Tellier: The generic term for the technology that converts these big datasets of pictures into 3D models is photogrammetry. YU: Which is just one process. Even still, photogrammetry was invented more than 100 years ago…Before, it was a manual process and we were only able to reproduce a part of a wall or something like that. Big data processing has enabled us to reproduce a huge part of the real environment. It’s a very new way of doing things. Just in the last two years, we’ve become able to make a copy of an entire city—like Mosul or Aleppo—something not possible before. We also have a platform to manage this huge amount of data and we’re working with cloud computing. In the future we want to open this platform to the public. AN: All of this technology has already grown so quickly. What do you see coming next? YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us, because they're in a rebel zone, for example. Cameras also continue to become better and better. Today we can produce a model with one point per millimeter and I think in the future we will be able to have ten points per millimeter. That will enable us to see every detail of something like small writing on a stone. ET: Another possible evolution, and we are already beginning to see this happen thanks to artificial intelligence, is automatic recognition of what is shown by a 3D model. That's something you can already have with 2D pictures.
There are algorithms that can analyze a 2D picture and say, "Oh okay, this is a cat. This is a car." Soon there will probably also be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, "Okay, this is a Corinthian column. This dates back to the second century BC." One of the technologies we are working on is the technology to create beautiful images from 3D models. We’ve had difficulties to overcome because our 3D models are huge. As Yves said before, they are composed of billions of points. And for the moment there is no 3D software available on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, where we don't have to lower the quality of our 3D models. We can keep the native resolution, quality, and photorealism of our big 3D models, and create very beautiful videos from them that can be as big as 32K and can be projected onto very big areas. There will be big developments in this field in the future. AN: Speaking of projections, what are your approaches to making your research accessible? Once you've preserved a site, how does it become something that people can experience, whether they're specialists or the public? YU: There are two ways to open this data to the public. The first is producing digital exhibitions that people can see, which we are currently doing for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection mapped a huge box, 600 square meters [6,458 square feet], with 3D video.
We were able to project monuments like the Damascus Mosque or the Palmyra sites, and visitors are surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a scale of one to one so the visitor feels like they’re inside the sites. AN: So you could project one to one? ET: Yes, we can project one to one. For example, in the exhibition we participated in recently, at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. And often the visitor could see the sites at a one to one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. And they can discover not only the city as a whole but also the monuments and the architectural details. They can switch seamlessly between different scales—the macro scale of a city, the more micro one of a monument, and then the very micro one of a detail. AN: What are you working on now? ET: Recently, we participated in an exhibition financed by Microsoft and held in Paris at the Musée des Plans-Reliefs, a museum that has replicas of the most important sites in France. They're 3D architectural replicas, or maquettes, that can be 3 meters [approx. 10 feet] wide; they were commissioned by Louis XIV in the 17th century because he wanted replicas to prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality and proposed making an experience in this museum, focusing on the replica of Mont-Saint-Michel, the famous site in France.
We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both sites—the replica and the real site—and used them to create the holograms that were embedded and superimposed. Through headsets, visitors would see a hologram of water rising and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, the interplay between the two. And you could also see the site as it was hundreds of years before. It was a whole new experience relying on augmented reality, and we were really happy to take part in it. This exhibition should travel to Seattle soon.
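The pixel-to-point-cloud projection Ubelmann describes can be sketched in a few lines. This is a minimal illustration, not Iconem's actual pipeline: real photogrammetry estimates each pixel's depth from many overlapping photos, whereas here the depths and pinhole-camera parameters (fx, fy, cx, cy) are assumed illustrative values.

```python
# Sketch: back-projecting image pixels into a colored 3D point cloud.
# Camera intrinsics below are hypothetical; a real pipeline would recover
# them (and per-pixel depth) from the photo set itself.

def backproject(u, v, depth, color, fx=1000.0, fy=1000.0, cx=640.0, cy=480.0):
    """Map one pixel (u, v) with an estimated depth to a colored 3D point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth, color)

def image_to_point_cloud(pixels):
    """pixels: iterable of (u, v, depth, color) tuples -> list of 3D points."""
    return [backproject(u, v, d, c) for (u, v, d, c) in pixels]

# Two sample pixels, each with an estimated depth (meters) and an RGB color.
cloud = image_to_point_cloud([
    (640, 480, 2.0, (120, 110, 95)),  # at the principal point: x = y = 0
    (740, 480, 2.0, (130, 115, 90)),
])
print(cloud[0][:3])  # -> (0.0, 0.0, 2.0)
```

Repeated over billions of pixels across thousands of photos, this is how a set of 2D images becomes the photorealistic point clouds the interview describes.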
This past year was a big one for Morpholio, whose app Trace was named an Apple “App of the Day” and a top app in education, design, and drawing. Now, with iOS 12’s updated augmented reality (AR) functionality, the company is pushing the iPad’s architectural sketching abilities further, and closer to (almost) real space. The app was always intended to blend the benefits of sketching and CAD, giving people the option to quickly work things out by hand and combine the benefits of paper and digital tools, but now Trace is hoping to disrupt another common practice: taping and staking off rooms. “People have been pacing out plans as long as they've been drawing them. The palpable sense of scale, dimension, and extent simply can't be communicated with stills or even animation,” said Mark Collins, Morpholio cofounder. Now, however, leveraging Apple’s ARKit 2, Morpholio has developed a tool they’re calling ARSketchWalk, which allows Morpholio Trace users to project their drawings at scale onto what’s captured through an iPad’s camera, as well as to extrude walls and other shapes by touch. Collins went on to say: “The AR experience gives you a real sense of how your space will feel and lets you decide if it works for you.” Morpholio cofounder Anna Kenoff said the company has seen people not only putting it to the test on clear sites and even on Manhattan office floors, but also using it to design smaller components like mullions or doorknobs to get a sense of scale, and adjust it, right at their desks or on site. “Scale is hard to perceive in drawing if you’re not a pro, or even in a 3D model, which our field has learned over the years, can be deceiving when it comes to real scale,” Kenoff explained, adding that AR allows an easier way to grasp relationships between spaces with the added ability to see drawings come to life—a quick way to test and iterate ideas that combines the benefits of traditional sketches with the added precision of accurate scale and perspective.
The beta release has also been popular with landscape architects, reported Kenoff. The app also takes advantage of another new Apple feature: the ability to experience AR environments simultaneously and from different vantages across multiple devices. “Design is one part idea and two parts convincing others that your idea is the right one,” said Kenoff. The idea, then, is letting other designers, project partners, and clients get the shared experience of moving through the speculative spaces (secondary users can use their phones, with no need for an iPad Pro). Morpholio also expanded the app’s responsiveness to new Apple Pencil gestures, and Kenoff said that the new pencil design has been a boon, allowing Morpholio engineers to build software for designers to draw things “only a hand” can. While apps like Morpholio Trace still can’t entirely do away with powerful desktop applications, the increasing computing power of mobile devices paired with the ability to use them on site is offering new avenues for use in the design process and in communicating with clients. “We’re just beginning to harness that technology and see how designers might use it; these are the early phases of that exploration,” said Kenoff. She went on to say, “we see Trace as just one piece of a much larger infrastructure; designers need different tools at different phases of design, but they need different levels of precision at every phase, something Trace can offer.”
London’s Serpentine Galleries are going high-tech for their 2019 summer installation. Together with Google Arts & Culture and Serpentine trustee David Adjaye, the arts institution is soliciting augmented reality (AR) architecture proposals to run alongside the 2019 Serpentine Pavilion this summer. Serpentine Augmented Architecture is taking entries from all over the world until February 25. According to the brief, applicants are expected to “propose imaginary city spaces and speculations on the built environment to be developed and experienced” in AR onsite at the galleries. The jury, composed of writers, curators, designers, technologists, and architects, is evaluating proposals on three criteria: How can the city be augmented or reinvented through AR? How can AR recontextualize our spatial relationships, given that the challenges and limitations of designing in the physical world are nonexistent in digital realities? Finally, each submission needs to be at least somewhat site-specific, as the Serpentine Gallery sits within an ecologically sensitive park and welcomes up to 12 million visitors a year. Most importantly, because AR is in a relatively nascent stage, despite the tech being readily available to anyone with a smartphone, the boundaries and rules for its use have yet to be written. Any creative uses of augmented reality, including those that evolve over time, will be accepted. Entrants who are selected for the two-stage competition’s shortlist will move ahead to the second round and will be given a stipend of $1,000 to further develop their idea. After that, one winning proposal will be realized on the gallery’s grounds in July, and the winning team will receive approximately $3,800 for travel and accommodation expenses. Interested in applying? The full guidelines can be found here.
The Serpentine Galleries are leaning heavily into tech this year, and Marina Abramovic will be staging her own AR installation in the main gallery space from February 19 through 24. While the architect of the 2019 Serpentine Pavilion hasn’t been announced yet, that information should become available any day now, giving applicants a better idea of what their work will be shown alongside.
The streets of Stockholm are still littered with the fading colors of campaign posters from an undecided election: from the pink backdrop of feminist candidates, to progressive reds, conservative blues, and the struggling greens, to the slippery, slick gradients of the neo-nationalists. The politics of the public sphere have been unavoidable lately in Sweden, but a couple of heavy rains are slowly turning what’s left behind back into grey pulp. Meanwhile, Value in the Virtual, an exhibit by London-based Space Popular, blossoms with a polychrome array of signs, patterns, and symbols. The show has just opened at Sweden’s national center for architecture and design, ArkDes, under its new director Kieran Long. A new feature of the museum is a smaller gallery space—Boxen, a steel box designed by local practice Dehlin Brattgård—inside one of the two main exhibition halls. It is intended for fast-paced architecture shows curated by former ArchDaily editor James Taylor-Foster, and Value in the Virtual is the first display of work by a contemporary design practice in the new setup. The Space Popular duo, Lara Lesmes and Fredrik Hellberg, have grabbed the opportunity to build what they describe as their manifesto. It consists of six "typical" Stockholm environments—a downtown apartment, a vintage-styled coworking café, an exclusive nightlife district, a banquet hall, an iconic subway station, and a royal park—retrofitted with an added layer of augmented reality. They are rendered in 1:1-scale elevations printed on tapestries and hung from scaffolding in cordoned-off spaces. Each corner is a mash-up with "walls" from two or three locations. Entering the exhibition space, you are invited to take your shoes off (to experience the printed carpets on the floors), and once inside, to put on a pair of virtual reality goggles. They are a window into Voxen, a parallel version of the same gallery space produced for Sansar, a social virtual reality platform.
During the press preview, an online visitor had already found his way there for a peek. The avatar, dressed in a black bodysuit and a Daft Punk motorcycle helmet, showed up out of nowhere, mumbled a distorted "nice to meet you," and soon disappeared. In this realm, you get 3-D views of some of Space Popular’s scenarios: one is a version of Stockholm where public art is updated by the minute; in another the allegorical wall mosaics of the Nobel Prize venue are adjusted to tout recent scientific achievements; and in another a selective nightclub bouncer might actually let you in after all, if you upgrade to a nicer-looking "skin." For a casual visitor to be able to follow the narratives, though, some additional guidance would be welcome. Instead, the dense 3-D visuals are trusted to speak for themselves. An argument underlying Value in the Virtual is that architects could, and should, engage with the potential digital depth of architectural surfaces, as well as what comes with it: cognitive psychology, color theory, and the techniques to manipulate it. The exhibit seems to say, "Get on with it and design your own digital tapestry for the scaffolding that is, for lack of a better word, the real world. And if you don’t," the argument is, "somebody else will." And when augmented reality hardware moves from "a drawer on your face" to a casual pair of glasses, virtual dressing of space, and the market for virtual real estate, could become a big deal. “What is the role of the architect and designer in this transition?” a section of the wall text asks, when designers are “not bound by a requirement to shelter" nor “to the various physical limitations” we are used to? Either way, many architects are already engaged. Architecture school graduates, especially in the United States, are increasingly recruited by software companies before they even contemplate a career as licensed master-builders. 
Architecture is already part of the $137.9 billion gaming market, but perhaps not in familiar ways. The exhibition wants to look at what is going on while augmented reality is still in its infancy and to figure out the consequences of the tech before it is sitting on everyone’s face. Will visitors get it? Do the chosen scenarios illustrate what is actually claimed to be at stake? What is the real promise here? What is the threat? These questions are touched upon in the exhibition brochure but not very evident in the exhibit itself. And if the argument is made more clearly in writing in Space Popular’s ten-point programmatic declaration, what does the gallery display add, except a VR demo and an institutional seal of approval? Maybe that’s enough. It’s a start to the conversation Space Popular wants us to have. Stockholm is normally very neat. Putting up posters is prohibited, and the rules are generally abided by. Political campaign material is the exception, and come election season, the city is carpeted with colorful platitudes and slogans. Public space is already a kind of virtual real estate, a moderated social medium that is regulated, traded, and hacked. It always has been, you could argue. Sure, the visual exploitation of it is augmented by new technologies, but the question, then, is whether that augmentation does something that didn't happen before. The politics of augmentation are not that different from the conflicts of interest, power plays, and negotiations of spatial politics already being debated. A peephole into another world can make you forget where you are standing at the moment. Sometimes, a mirror would perhaps be more useful. Space Popular: Value in the Virtual, curated by James Taylor-Foster, is on view at ArkDes, Sweden's national center for architecture and design in Stockholm, through November 18, 2018.
Apple has wrapped up its keynote at the 2018 Worldwide Developers Conference (WWDC) and announced several big changes coming with iOS 12 that should shake things up for architects and designers. The biggest announcements focused on VR and augmented reality (AR), as Apple unveiled ARKit 2.0. With a presentation backed by a constellation of big names such as Lego, Adobe, Autodesk, and more, the sequel to Apple’s original AR infrastructure promised to bring a much more cohesive AR ecosystem to consumers. For architects and interior designers, Apple’s promise of easily digitizing and later placing real-world objects in an AR environment could have a huge impact on envisioning what a space will look like. If ARKit 2.0 succeeds, it could be used to decorate a room or to put different architectural options (literally) on the table without having to build physical iterative models. A collaborative, locally hosted AR experience was also shown off at the keynote, with two players using their iPads to knock down wooden towers Angry Birds-style. One complaint about the original ARKit was the fragmented ecosystem and inability to interact with AR objects across apps. For iOS 12, Apple has partnered with Pixar to develop a new proprietary file format for 3D models, USDZ (Universal Scene Description Zip). USDZ files should be openable across all Apple devices, including Macs, and Adobe is promising native Creative Cloud support. The introduction of a streamlined system for sharing and examining 3D objects in the real world, and for creating persistent AR experiences in specific locations, likely means enhanced functionality for apps like Morpholio once the new iOS rolls out. For those looking for more practical applications of the technology, Apple is also expanding its list of native AR apps.
Developers who have measuring tools on the App Store likely won’t be happy with Measure, Apple’s in-house AR solution that allows users to snap a picture of reference objects and then tap to measure anything else. Apple's senior vice president of Software Engineering, Craig Federighi, took to the stage and used Measure to seamlessly calculate the length of a trunk, then of his own baby photo. iOS 12 will get a public release when the latest iPhone model rolls out in September of this year.
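The principle behind tap-to-measure tools like Measure is simple: the AR framework resolves each screen tap to a 3D point in world space, and the reading is the straight-line distance between two such points. A minimal sketch, with hypothetical hit-test coordinates standing in for what the framework would return:

```python
# Sketch of AR tap-to-measure: the distance between two resolved 3D points.
# The coordinates below are made-up stand-ins for real AR hit-test results.
import math

def distance(p1, p2):
    """Euclidean distance between two 3D points (here, in meters)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Two hypothetical hit-test results, e.g. the two ends of a trunk lid:
end_a = (0.00, 0.75, -1.20)
end_b = (0.90, 0.75, -1.20)
print(f"{distance(end_a, end_b):.2f} m")  # -> 0.90 m
```

The hard part, which the AR framework handles, is mapping a 2D tap to a stable 3D world coordinate in the first place; once that is done, the measurement itself is just geometry.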
In his panel presentation, “Enhanced Realities and Immersive Experiences,” at the upcoming Tech+ conference on May 22 in New York City, co-presenter Chris Mayer, chief innovation officer at Suffolk Construction, will discuss how innovative technologies like AR and VR are changing the industry. But chasing the latest “shiny, new object” isn’t what the talk—or even the Tech+ expo—is all about. “It’s not a lack of available technology solutions that could potentially be deployed,” he said. “It’s the idea about how do you focus on delivering value, not just delivering change, and how do you make sure that communication and collaboration are effective in supporting the organization?” Mayer has worked at Suffolk Construction for the past three years, applying what he learned during his 30-year tenure in print media at The Boston Globe to effectively integrate people, process, and technology. Having come from an industry that experienced significant disruption, Mayer says the challenges and solutions employed there, along with “focusing on not just bright, shiny objects and technology, but how to create value out of that, and the necessary steps of engaging people in the organization, and putting processes in place, were all potentially applicable” to the construction industry where he landed.
Creating value across the board
Data drives innovation, and value creation must remain front and center of the process. Otherwise, organizations risk going out of business if they focus solely on technology for technology’s sake, he says. “Part of the way we want to make sure we’re driving that [value] consistently in the innovation that we’re supporting is, ultimately, 'the juice is worth the squeeze.' You want to find something that’s going to matter.” And what counts isn’t just making incremental improvements to specific functions within the design and construction industry, which Mayer observes is highly distributed. Rather, it’s about taking a holistic view and rethinking the entire process end-to-end, he says. “I think we are at a point in time where looking both at opportunities vertically—which is traditionally where I think people would focus on productivity gains—but also looking horizontally at the entire value chain of the construction process from the initial touch point with the owner, through the design, through the financing stages, through the planning and optimization stages, through the execution and construction control periods into the warranty and the quality control process,” he explained. “The idea about how to scale innovation is a big opportunity for us in the industry.”
The right tool for the job
To that end, Suffolk Construction has created “smart labs” internally that serve as operations control towers where technologies can be invented, tested, implemented, and scaled. Utilizing a series of interconnected screens on what Suffolk calls "huddle walls," the team at Suffolk can simultaneously view project information and financial reports or engage in lean strategies and design planning throughout the lifecycle of a project. The labs also feature a virtual reality "cave" with head-mounted displays in which projects can be viewed individually or as a group. Naturally, Mayer doesn’t see immersive technology as a replacement for existing communication tools, but rather as a supplement. “When I look at virtual reality or augmented reality… I see those as additional options to add value to the conversation,” he said. “Because some information is going to be most effectively delivered in sort of static form on an iPad, but some of it is going to be much more effective if you put on a headset and engage with it.” Whatever is most effective—whether it’s a 3-D model, a 2-D model, or shop drawings—is the tool that should be used, he adds. At the end of the day, that’s what adds the most value.
This year’s Tech+ conference—taking place on May 22 in New York City and showcasing technological innovators in the AEC industry—will feature pioneering speakers who are rethinking existing technological paradigms. Among them is Iffat Mai, practice application development leader for Perkins+Will, who will be co-presenting a discussion about enhanced realities and immersive experiences. A self-described technology geek, Mai is excited that the design and construction industry, which has traditionally lagged behind in adopting new technology, is finally showing signs of receptivity. “What I’ve seen is a shift in some people’s attitude, of designers and project teams, who are very open-minded about accepting these new technologies and integrating them into their workflow and process,” she said. Mai notes that a number of software companies are making virtual reality (VR) and augmented reality (AR) platforms more compatible with existing design tools, allowing for greater integration and efficiency. “I’m really happy to see that’s happening in all different levels in our industry.”
It’s all about communication
Mai’s enthusiasm for change stems from her belief that these new tools are improving communication and client engagement—an assessment that’s been tested in practice at Perkins+Will, the results of which she’ll share during her presentation at Tech+. “I think VR/AR is the ideal communication tool for the AEC industry,” Mai said. “As architects, communication of design is the bread and butter of our business.” Noting that many clients aren’t particularly adept at visualization, Mai suggests that 3-D technology can help them better understand not only how a design looks, but also gain a better sense of scale and how the space will actually feel. Oftentimes, clients look at drawings and say they understand them, but are surprised when a space is built because they don’t conceptualize the same way design practitioners do. Mixed reality solves the problem in many ways. “We’ve been implementing all these new technologies into our everyday design process and really looking to engage our stakeholders and our clients, and offer them the opportunity to be fully engaged in the design process,” Mai explained. “It’s not just giving them nice little drawings; we really put them into an immersive environment and encourage them to evaluate things by really understanding what the design is about so that, in the end, I think that the clients are a lot more comfortable and happy with the final product.”
Overcoming barriers to innovation
As a result, Mai says VR and AR technologies are streamlining the design and review process, saving both time and money. With the cost of hardware and software dropping, she suggests the barrier to entry will be lowered, especially for smaller firms that currently may not be able to afford them. Ultimately, wide-scale adoption of mixed reality technology boils down to two things, according to Mai: fear of change, and a company-wide commitment to innovation. “If you can get over the fear of changing and have kind of long-term sight of the future and not be afraid of changing, that’s a critical component of innovation,” she said. “And then your company leaders have to be really promoting company-wide innovation, to have people just think out of the box and looking for new ways of doing things in every aspect of the company.” TECH+ Expo from Architect's Newspaper on Vimeo.
Brooklyn-based artist Artie Vierkant melds photography, commercial printing, and sculpture. For Vierkant, there is no real hierarchical distinction between art experienced in real life and art experienced in a photograph. In Vierkant’s post-digital world, 2-D and 3-D, image and object, original and copy, exist on a level playing field. He often takes photos of his installations, modifies them, and repurposes them as new works in and of themselves. Most recently, this has taken the form of an augmented reality (AR) app, Image Object (a term Vierkant coined in 2010 to describe work which “exist[s] somewhere between physical sculptures and altered documentation images”), released in conjunction with his exhibition Rooms greet people by name at Galerie Perrotin on the Lower East Side. Vierkant sat down with the Architect’s Newspaper to discuss the app, the boundaries between 2-D and 3-D, and what augmented reality means for the future of public space.
Architect's Newspaper: Along with your exhibition Rooms greet people by name at Galerie Perrotin in New York, you’ve released an app, Image Object. Can you give us a little background on the app?
Artie Vierkant: The app is functionally just a camera app. Part of the idea is for people to take photos and videos with it. For me, that’s just another extension of the work. It started because I realized augmented reality platforms are basically what I do already with the installation view editing in a very simple way. I started playing around with those tools and realized that if I applied the same principles as what happened in the installation views, AR could essentially create the exact same aesthetic experience, but perceptually in space, where you could wander through and around the work. It takes the reproduction in a photo on my phone or online or in a book and renders it spatially.
AN: Like much of your work, this app troubles the boundary of 2-D and 3-D, perhaps taking it even further than what you've done before. If you're using the Image Object app in the exhibition, there’s a simultaneity between the art's instantiation in the gallery space and in the flat space of your screen.
Artie Vierkant: I’m always quite serious about these points of intersection.
AN: But, on the other hand, the app works anywhere. You can attend the exhibition without going to the gallery.
Artie Vierkant: Or make your own.
AN: I was reading somewhere that you said digital space was “susceptible to modification.” How might we think of physical space along these lines? One of the big questions for AR is where the boundaries between physical spatial experience and digital experience are. And if those boundaries even matter.
Artie Vierkant: I think it’s becoming increasingly obvious that those distinctions and boundaries don't really matter. So many of these technologies are just prosthetics that extend our regular perceived lived reality. The idea of having two totally separate realms was debunked long ago. We’re living in an incredibly weird time.
AN: The idea of being “post-internet” presented the notion that the internet became so pervasive it was simply the background of our world, not some special, fetishizable thing apart from it. It infiltrated existence. Now everything is passively designed to accommodate our use of the internet, to accommodate computation—possibly without even conscious thought. What sort of spaces and designs can we imagine as AR becomes increasingly high-level and increasingly entrenched?
Artie Vierkant: This is maybe where the really interesting possibilities with AR lie. For example, buildings are designed knowing that a certain type of use is going to be predominant within them, but AR actually creates a situation where for the first time you could actually introduce a completely other layer.
You could say that's not totally the case, because many of these things you could physically make; you'd just waste a bunch of material and resources doing it.
AN: The gallery is almost a perfect metaphor for this, though, because it is always claiming to be an empty, evacuated space.
Artie Vierkant: The white cube will not stop trying to attest that it is a neutral zone or a completely empty space of limitless possibility. Which is obviously false.
AN: AR takes that notion of emptiness and asks why bother filling it with stuff when you can fill it digitally. In this way, the gallery is the most extreme test case for the relationship between physical space and AR. Another interesting problem of AR is its relation to the private/public in space. The gallery is a space we occupy with others, but the app’s view of it is limited to our own phones. What does AR mean for public space and togetherness in physical space?
Artie Vierkant: I have my own assumptions about how a lot of these technologies will play out and continue to be developed. I don't think that the more utopian options are going to play out in the short term, frankly. Clearly one of the issues that we have right now with "digital space” is its relationship to corporations. Social media space is basically just composed of huge advertising companies. You could imagine a more egalitarian version of this under capitalism, where you're actively selling your attention and being remunerated for it.
AN: Still, that remains capitalism as such, if a less extreme variety of it. Even the more radical proposals that have existed have quickly turned into commercial tools; if late capitalism is good at anything, it's rapidly subsuming anything that was initially meant to oppose it.
Artie Vierkant: But, outside of an individualized experience, which the market is predisposed to, you could also imagine a very strange reality produced by augmented reality as a shared, collectivized experience: a separate layer over everything, or different spaces you could go into, like a white cube, and load up stuff that people have left there.
Rooms greet people by name will be on view at Galerie Perrotin until April 8. Image Object will remain downloadable from the Apple App Store.
Artie Vierkant: Rooms greet people by name
Galerie Perrotin
130 Orchard Street
New York, NY
Through April 8