Posts tagged with "Virtual Reality":


AN talks to Eyal Weizman about tech in truth-telling ahead of Forensic Architecture’s first U.S. survey

Forensic Architecture has garnered a significant reputation within the field of architecture (they had a major showing at the most recent Chicago Architecture Biennial) and beyond for their work reconstructing violent events perpetrated by state actors and others using architectural tools and emerging technologies. The collective’s work has been displayed everywhere from the courthouse to major art exhibitions, including during this past year’s Whitney Biennial. The video One Building, One Bomb, co-produced with The New York Times, won an Emmy this past year, and in 2018 they were also nominated for the United Kingdom’s prestigious Turner Prize. This month, Forensic Architecture, which is based out of Goldsmiths, University of London, will have its first major U.S. survey; Forensic Architecture: True to Scale will open on February 20 at the Museum of Art and Design at Miami Dade College. Ahead of the Miami exhibition, AN spoke with Forensic Architecture founder Eyal Weizman to discuss the changes of the past decade, the power of technology, and the importance of forensics in a “post-truth” era.

Drew Zeiba: Forensic Architecture began a decade ago. How has the project changed and how have the tools you use evolved since then?

Eyal Weizman: When we started around 2010, it was the beginning of the Arab Spring and the really heartbreaking civil war that came in its wake. Those particular sets of conflicts had a particular texture to them. They happened in an environment that had a lot of mobile phones, in areas with internet connectivity, and where the government’s ability to shut down the internet was not always successful. We found ourselves in an environment where increasingly we had more and more videos around incidents that we could map. It was also the early 2010s, when London saw great protests around tuition fees and then the big protests after the police killing of Mark Duggan in North London.
This killing was during a period when police did not yet have dash cams. And ever since, we've seen the introduction of body cams and dash cams to police investigations. If you look today at the conflicts that are taking place, we have several thousand videos, hours long, broadcasting live as things are happening. The sheer media density requires us to use different technologies in order to bring accountability. We have recently developed machine vision and machine learning technologies that, working together with human researchers, can speed up the process of sieving through thousands and thousands and thousands of hours of content coming from confrontations with police in Hong Kong, for example. In relation to police violence, we have now concluded the investigation in Chicago [into the police killing of Harith Augustus] with full body cams available, several dash cams, CCTV footage, etc. We are working in a much more media-saturated environment and need new tools like artificial intelligence to help us identify materials, like our work on Warren Kanders that used machine learning. [Kanders is the ousted vice chairman of the board of the Whitney Museum, whose company Safariland sells tear gas used at the U.S.-Mexico border, in Gaza, and elsewhere, including in U.S. cities such as Baltimore and Ferguson, Missouri.] We're creating virtual reality sites where witnesses can walk through the scene together with psychologists and lawyers, with protection, but can also recall events. And we are trying to be at the cutting edge of technologies that would help social movements and civil society to invert the balance of epistemological power, against the monopoly of knowledge that states have over information in battlefields and in crime scenes.

The abundance of images also has to do with the increasing presence of surveillance—including by CCTV cameras and police body cams, as you mention.
How can architectural and technological tools invert the power relationship embedded in some of these commonplace image-making tools?

Forensics have to be in the hands of the people. Forensics was developed as a state tool, as a form of state power, as a police tool. But when the police is the agency that dispenses violence and the agency that's investigating it, we have a problem. We absolutely need to be able to have independent groups holding police to account. And what we have is our creativity, and we can effectively mobilize and make more of much fewer bits of data and imagery, because we're working aesthetically and we work socially with those independent groups in producing evidence. We socialize the production of evidence; we make it a collective social practice that involves the communities that are experiencing state violence continuously.

At the same time, Forensic Architecture often works in places where there is seemingly a limited amount of the evidence or data that investigators typically rely upon, or with evidence that is biased. Police body cams show the officer’s perspective only, for example. Your work is coming at a time that people are describing as “post-truth.” How does the work of Forensic Architecture fit into this political context?

The very nature of what we call investigative aesthetics is based on working with weak signals and with partial data. You need to fill that gap with a relation between those points you have, sort of like stars in a dark sky. You see very few dots and we need to actually see how they can support the probability of something to have occurred. And any investigative work that comes from the point of view of civil society is both about demolishing and building.
So we need to use our training as critical scholars in deconstructing police statements, or military statements taken by secret services or the government—and we need to take those ruins, those scattered bits of media flotsam that exist, and build something else with them. There’s always demolition and rebuilding that takes place. That is very structural to our work. Right now, there is mistrust in public institutions in the political sphere. In the so-called post-truth era, that trust is not being replaced. Those that tell us not to believe anymore in science and in think tanks and in experts are not building a new epistemology in its stead. They're simply demolishing it. Rhetoric replaces verification. What we do similarly to them is we are questioning state-given truths. We are attacking those temples of power and knowledge, but we attempt to replace them with a much more immanent form of evidence production that socializes the production of that evidence.

Architects apply the latest in fabrication, design, and visualization to age-old timber

Every so often, the field of architecture is presented with what is hailed as the next “miracle building material.” Concrete enabled the expansion of the Roman Empire, steel densified cities to previously unthinkable heights, and plastic reconstituted the architectural interior and the building economy along with it. But it would be reasonable to question why and how, in the 21st century, timber was accorded miracle status on the tail end of a timeline several millennia long. Though its rough-hewn surface and the puzzle-like assembly it engenders might seem antithetical to the current global demand for exponential building development, it is timber’s durability, renewability, and capacity for sequestering carbon—rather than releasing it—that inspires the building industry to heavily invest in its future. Cross-laminated timber (CLT), a highly resilient form of engineered wood made by gluing layers of solid-sawn lumber together, was first developed in Europe in the early 1990s, yet the product was not commonly used until the 2000s and was only introduced into the International Building Code in 2015. While mid-to-large firms around the world have been in competition to build the largest or tallest timber structures to demonstrate the material’s comparability to concrete and steel, a number of independent practitioners have been applying the latest methods of fabrication, computational design techniques, and visualization software to the primordial material. Here, AN exhibits a cross-section of the experimental work currently being pursued with the belief that timber can be for the future what concrete, steel, and plastic have been in the past.
AnnaLisa Meyboom

In the fall of 2018, 15 of professor AnnaLisa Meyboom’s students at the University of British Columbia (UBC), along with David Correa of the University of Waterloo, Oliver David Krieg of Intelligent City, and 22 industry participants designed and constructed the third annual Wander Wood Pavilion, a twisting, latticed timber structure made up entirely of non-identical components. By taking advantage of the advanced fabrication resources available at the UBC Centre for Advanced Wood Processing, including a CNC mill and a multi-axis industrial robot, the project was both a learning opportunity for its design team and a demonstration to a broader public that timber is a more than viable material to which contemporary fabrication technologies can be applied. The pavilion forms a bench on one end that's large enough for two people, a public invitation to test the structure's strength and durability firsthand. While the pavilion only required three days to fabricate and assemble on-site, a significant amount of time and energy was spent ensuring its quick assembly when the time came. A rigorous design workflow was established that balanced an iterative design process with rapid geometric output that accounted for logical assembly sequencing. Every piece of the pavilion was then milled to interlock into place and be further secured by metal rivets. The project was devised in part to teach students one strategy for narrowing the gap between digital design and physical fabrication while applying a novel material. In this vein, a standard industrial robot was used throughout the fabrication process that was then “set up with an integrator specifically to work on wood,” according to Meyboom.
Gilles Retsin

While Gilles Retsin, the London-based architect and professor at the Bartlett School of Architecture, has long experimented with both computational design and novel methods of fabrication, a recent focus on timber has propelled his practice in a bold new direction. A giant wooden structure installed at London’s Royal Academy in early 2019, for instance, was the architect’s first attempt at applying augmented reality to modular timber construction through the use of Microsoft’s HoloLens. “We used AR to send instructions directly from the digital model to the team working on-site,” Retsin explained. “AR therefore helps us understand what a fully-automated construction process would look like, where a digital model communicates directly with people and robots on site.” In a recent international competition set in Nuremberg, Germany, Retsin set his sights on a much larger scale for what would have been the world’s first robotically prefabricated timber concert hall. Designed in collaboration with architect Stephan Markus Albrecht, engineering consultancy Bollinger-Grohmann, climate engineers Transsolar, and acoustic specialists Theatre Projects, the proposal takes advantage of the site’s location in a region with an abundance of timber while envisioning the material’s application to a uniquely challenging building type. The building’s form exhibits the material’s lightness using 30-foot sawtooth CLT prefabricated modules over the main lobby spaces, which are exposed from the exterior thanks to a seamless glass envelope. “Designing in timber not only means a more sustainable future, but also has architects profoundly redesigning buildings from the ground up,” said Retsin. “It’s a challenging creative task; we’re really questioning the fundamental parts, the building blocks of architecture again.”

Casey Rehm

For SCI-Arc professor Casey Rehm, working with timber has meant challenging many issues in the field of architecture at once.
Timber is a rarely considered building material in Los Angeles given the high time and material costs associated with its transportation and manufacturing. “Right now,” Rehm said, “the industry is manually laying up two-by-sixes into industrial presses, pressing them into panels, and then manually cutting window openings.” But if timber waste itself were adopted as a building material, he argued, the material could be far more globally cost-efficient. While timber has been used in the construction of increasingly large structures around the world, such as multistory housing developments and office buildings, Rehm believes the material can be reasonably adapted to a smaller scale for quick deployment. In this vein, Rehm has been researching strategies with his students for producing inexpensive CLT panels for the construction of homeless housing and accessory dwelling units in Los Angeles, a city with a particularly conspicuous housing shortage. But aside from its potential as a cost- and material-efficient material, the architect has applied timber to even his most exploratory design work. NN_House 1, a sprawling single-floor home Rehm proposed in 2018 for the desert plains of Joshua Tree, California, was designed in part using a 3D neural network to develop ambiguous divisions between rooms, as well as to blur the divide between interior and exterior. The AI was trained on the work of modernist architects—while producing idiosyncrasies of its own—to develop a living space with multiple spatial readings.

Kivi Sotamaa

As an architect practicing in Finland, Kivi Sotamaa is certainly not unique in his community for his admiration of the far-reaching possibilities of timber construction. He is, however, producing novel research into its application at a domestic scale to reimagine how wood can be used as a primary material for home construction.
The Meteorite, a three-story home near Helsinki constructed entirely of locally grown CLT, was designed using an organizational strategy the architect has nicknamed ‘the misfit.’ This system, as Sotamaa defines it, pairs two distinct formal systems to generate room-sized interstitial spaces that simultaneously act as insulation, storage space, and housing for the building’s technical systems. “Aesthetically,” Sotamaa elaborated, “the misfit strategy allows for the creation of a large scale monolithic form on the outside, which addresses the scale of the forest, and an intricate human-scale spatial arrangement on the interior.” Altogether, the architect estimates, the home’s CLT slabs have sequestered 59,488 kilograms, or roughly 65 tons, of carbon dioxide from the atmosphere. The Meteorite was developed and introduced to the client using virtual reality, and Sotamaa hopes to apply other visualization technologies to the design and production of timber architecture, including augmented reality that could allow builders to view assembly instructions in real-time on site. “When the pieces are in order on-site and [with clear] instructions,” Sotamaa explained, “the assembly of the three-dimensional puzzle can happen swiftly and efficiently, saving energy and resources when compared with conventional construction processes.”

Morphosis’s Kerenza Harris talks tech and integration

Kerenza Harris is the director of design technology at Morphosis, where she works across the firm to integrate advanced computational techniques and high-tech simulations throughout the design process. Ahead of her presentation on system-based design processes and extended reality at TECH+ in Los Angeles next week, AN caught up with Harris to get her takes on prototyping, parametricism, virtual reality, and more.

On going from the screen, to prototype, to facade:

Kerenza Harris: We work in a highly iterative process. We go over a form or design element again and again and again, almost on a loop, and we're trying to use the new forms in reference to other models, and they're linked parametrically, meaning that there's a knowledge from the shape itself of what it is, where it is, and what role it is playing. For example, when we created those modules (those little white forms, or “pillows,” as we call them) for the facade of the Kolon One & Only Tower in Seoul, South Korea, we had to start with the results of the study of light, views, and solar exposure. So the pillows are instantiated in a digital model, as a T-shaped object informed by the performance requirements of these three factors, and then this three-dimensional thing must also have a thickness, so we have to take into account structural demands as well, which we were able to achieve with a monocoque system. But the key thing is that, from the moment of inception, this piece will continue to exist and evolve throughout the project. We're trying to avoid erasing or redoing anything—instead, we're creating a smart element that has an identity and certain characteristics and which will continue to develop throughout the project. This intelligence will influence how the piece modulates itself, when we start inputting certain performance requirements or material characteristics.
So it moves forward throughout the project; it's part of a process of loops that also includes hand sketching, 2D drawing, simulation, analysis, 3D printing, and digital model making. In the case of the Kolon project, we created a physical, full-size prototype of the facade element. What we were trying to accomplish had never been done in our desired material, fiberglass, before. We had to find a fabricator, get into a relationship with that fabricator, find out how they fabricate the thing in the first place, learn the properties of the materials, the composite mix, and so on. We got involved and we built a one-to-one version of this thing.

On how a systems-focused approach can shape how architects work:

Instead of thinking about design as the creation of separate components—such as rooms, doors, facade pieces, toilets, and windows—we're taking a step back, and trying to understand projects in terms of organizational systems and workflows. Each of these systems has a behavior and a certain way that they interact with each other. Understanding components in terms of broader systems, we can globalize a workflow—for example, creating rules for certain systems or object classes, instead of applying meaning to individual elements. Once you establish the system, the pieces are very powerful, and they work on a local scale or a global scale. They can work on urban master plan design or they can work in the design of a chair. It's really efficient, but also a little tricky, because it introduces order but at the same time may produce disorder you wouldn’t otherwise encounter dealing with objects individually. Things may emerge from these systems that were unanticipated. When you push the number of systems or components to the maximum, and their interaction becomes more and more complex, you may find yourself with new, emergent conditions that you were not planning or designing for.
And that's actually what we're looking for, what we’re really interested in: something akin to the unexpected conditions of a city that’s developed over a long period of time.

On virtual reality:

Four years ago we were commissioned to transform a suite of hotel rooms at the Therme Vals resort in Switzerland. The existing rooms were very small, but within each we wanted to fit a freestanding, curved glass shower as a kind of light sculpture in the center. But we were struggling with the models for this project. It was quite difficult, from the digital model and scaled 3D-printed studies, to really assess the height of the table and certain things and how they would be used and navigated by guests, especially because it was all custom-made furniture, custom-made spaces in a very tight area. And so we built a movie set, almost. We used foam core, and someone went in and actually modeled the hotel room one-to-one using tape and glue so that we could actually stand in the space. It was alright, for a project of that scale—but I immediately thought, "Okay, we need to find another way because this doesn't quite work." We needed a way of inhabiting our spaces during design that would be easier, faster, more integrated with our workflow. So I got interested in VR. The headsets on the market were still clunky then. But we purchased one for the office to try it out, and it immediately made a difference. That development coincided with the beginning of the new Orange County Museum of Art design. In addition to having the typical concerns of an art museum regarding sight lines and lighting, the building has complex geometry and a big atrium skylight above the entrance. The broader team and project stakeholders were sometimes struggling to understand how the spaces worked because it was hard to experience them from the plan or computer screen. And the renderings were strong, but they still couldn’t really capture the feeling of it. We started putting people in there in VR.
We put the designers in, too. VR just gives you a completely different perspective on the work that you do. And it's also the first time that you can see your project at a one-to-one scale without spending millions of dollars to actually build it. And we’re getting to the point where this immersion can be immediately accessed. Now, in the Dassault Systèmes 3DEXPERIENCE/CATIA parametric software that we use, you can just go into the model with your headset, in real time. With this platform, you don’t need to render it or use any other software. I have a feeling this will be the next real game-changer for the industry.

For more on the latest in AEC technology and for information about the upcoming TECH+ conference, visit https://techplusexpo.com/events/la/

Luisa Caldas uses AR to let DS+R's BAMPFA tell its own story

Luisa Caldas is a professor of architecture at the University of California, Berkeley, where she leads the XR Lab, focused on using augmented reality (AR), virtual reality, and other extended reality tools as part of architectural practice. Recently, Caldas created the Augmented Time exhibition at the Berkeley Art Museum and Pacific Film Archive (BAMPFA), housed in a 2016 Diller Scofidio + Renfro-designed building in Berkeley, California. The exhibition used iPad-based augmented reality and physical artifacts to allow the narratives of the building—originally opened in 1940—and those who built it to shine through. AN spoke to Caldas about augmented storytelling, the narrative power of architecture, and what “extended reality” could mean to architects in the future.

Drew Zeiba: What was the initial inspiration behind Augmented Time?

Luisa Caldas: I was intrigued by the potential of AR to tell a story. I wanted to show a number of interwoven realities that I saw happening in this particular piece of architecture. The building was the Berkeley Printing Press, which was later abandoned and covered in graffiti, before becoming a museum designed by Diller Scofidio + Renfro. So, I saw the potential for a timeline kind of storytelling that would be engaging because the building itself was to become its own storyteller. You could embed all this multi-modal digital information that was captured in so many places and just have it congregated on the building itself. The other motivation was to show the workers that actually built the building. I wanted to make visible those faces and those stories that, as an architect who has built buildings, I know are there. Often, all these dramas, all this magic about putting something together, completely fades away and/or is told as the work of an architect. The people who build it actually kind of disappear. I’m really interested in the relation of this powerful new technology to telling invisible or forgotten stories.
Not just as a tool.

I think one of the things this project touches on is how AR could shape how we think about built history, and not only frame discussions of the history of a building, but even question what “preservation” and site-specificity mean in a post-digital age.

Totally, because a lot of the preliminary work that architects do on sites has to do with precedent, has to do with history, has to ask, “What is there? How did it come to be there?” We architects always tend to do that research, but it just becomes another invisibility, unless there is a very clear reference in the building design to site context or historical context. And so it becomes our first conceptual stages, our first approaches to the site, to the building, to the program, but it just usually vanishes away. I enjoy asking how process captures or preserves or ignores or incorporates or shows that history, that resonance of the site. For me, that was very fascinating, how to embody that enquiry in this AR experience.

It also shows the potential for AR as a tool for experiencing buildings and the built world as things that don’t just exist in a single moment, but unfold over time.

Exactly, which is such a part of human narratives, isn’t it? And it’s so many times built by layering things over one another. So, being able to peel those layers away, to turn the skin into a derma. You know, a skin is a surface, but a derma is a layered reality. That was also the idea: peeling the visible surface away and revealing those layers.

Can you tell me a little more about the technical aspects of the project and the process of realizing it?

I lead a lab of virtual and augmented reality, so there was initially a discussion: “Should we have AR headsets or should we have handheld devices?” And headsets were, at the time at least and even today, not really up to what we wanted to do. Also, I like the more democratic access to the experience that the handheld device provides you.
We developed the app for iPads, but we can have the app for a smartphone, so anyone can access AR, like you do with popular Snapchat filters. This is a project that had to be done in augmented reality, not virtual reality, because it had to be related to the physical artifact of the building. There was a lot of interaction with the museum about visitor access, about how to make invisible things appear in a museum. When you get to a museum you expect to see things, right? And there, what you wanted to view was not available. You have to get these devices and you have to understand where to go. That led us to a lot of research on what is called user interface and user experience (UI/UX). We had to invent this new way of showing an exhibition, and to understand how people related to the content and to the technology, and so we did two or three previews where we opened the exhibit and we were there seeing what people did and how they used it in a fluid, public event. Of course, I had a lot of students coming up to try it in the lab, but tech-savvy students use it very differently from seniors or kids, for example. We saw all these people using the technology and we learned from it, and we kept refining the UI/UX. We had to create everything from scratch, really; there wasn’t a precedent—we basically invented it. In terms of the technical solution, we decided to go for the Apple platform. As Apple was releasing more of its technology, we were constantly adapting to what was being made possible, to create more and more ambitious projects. Computer science at Berkeley is excellent. So I had a large team of computer scientists, architects, and also UI/UX designers, and the level of integration was very high. We met every week. Everyone was bringing ideas to the table; everybody was super excited. So there was a big integration between the creative side and the technical side.
The technologists and computer scientists could come up with a really creative solution, or the architects or designers could suggest something to the computer scientists that they were not expecting. I think the team was very committed and we knew we were breaking new ground, so it was a lot of fun.

After closing at the museum, BAMPFA AR — Augmented Time reopened at the Wurster Hall Room 108 gallery at UC Berkeley, where it will be on display until January 30. It will later travel to other locations around the country.

New Museum and Onassis USA will launch a mixed reality lab in Leong Leong–designed space

The New Museum’s NEW INC and Onassis USA, the American outpost of the Greek arts organization, have announced a new joint venture focused on mixed reality projects. Called ONX Studio (for Onassis, NEW INC eXtended Reality Studio), the project will begin as a two-year pilot program and will function as an accelerator, workspace, and gallery located in a 4,000-square-foot space in Midtown’s Olympic Tower, in a space being redesigned by Leong Leong. ONX Studio has in part grown out of projects by NEW INC members and the challenges they’ve posed. “One of the thrilling things around NEW INC is that mixed reality has organically become a huge area of focus for the members,” explained Karen Wong, deputy director of the New Museum and cofounder of NEW INC, noting that many past residents, working with AR and VR, have found success at forums such as Sundance, South by Southwest, and the Tribeca Film Festival. However, mixed reality is new, and festivals, museums, and galleries are still exploring how to best incorporate it into their programming. “Mixed reality is an area that’s growing by leaps and bounds, but there are no bespoke spaces in New York for the artists working with it,” said Wong. The new Leong Leong–designed space is being built specifically for year-long residents to experiment and create in, as well as to provide a platform to exhibit and share their work. Christopher Leong described ONX Studio as a “hybrid space,” one that blends its roles as both workspace and exhibition space. It will be focused around a large room that acts as an “immersive toolbox.” Secondary spaces, such as an acoustically isolated exhibition space, as well as basics like kitchens and conference space, will flank the center room, which is lined by an acoustic curtain. Furniture will be flexible, creating a kind of "cast of characters" that can be relocated throughout the studio.
A theatrical grid of outlets, tracks, lighting, and other technological infrastructure will be built into the space, allowing for flexible use of the studio, which could also be further subdivided or opened up. “The hope is that it’s open-ended in the way that it can be used,” explained Leong, “whether it’s for recording bodies in space with volumetric capture, as an artist's studio, or as a place to exhibit projections or sound pieces or mixed reality live performances. Our goal was to create an infrastructure that could support artists in many ways. We wanted to create a sense that the space could be transformational.” Wong noted that she saw the partnership with Onassis as especially compelling given the international organization’s penchant for commissioning radical theatrical works, and for its ongoing development of a program in Greece that shares sympathies with NEW INC, the Onassis Lab. ONX Studio plans to announce its initial dozen residents and open this spring. The artists—including previous NEW INC alumni—will spend a year developing mixed reality projects to be exhibited during a month-long showcase next winter. The program is being overseen by Wong along with NEW INC director Stephanie Pereira, Onassis USA artistic and executive director Vallejo Gantner, and the Onassis Foundation’s head of digital and innovation, Prodromos Tsiavos.
Placeholder Alt Text

Unity creates new open-source tool just for architects with Reflect

Video game software suites like Unreal Engine and Unity have made their way into the architectural arsenal, with AEC firms like Skanska, Foster + Partners, and Zaha Hadid Architects using them to visualize and test new buildings. However, these tools weren’t necessarily built with AEC professionals in mind, and while they often produce nice-looking environments, they don’t generally offer much in the way of architecture-specific functionality of the kind designers have come to rely on in BIM and CAD software. To help bridge this gap, the company behind Unity is testing a new piece of software called Reflect. “Unity Pro is a super powerful tool that people use for creating design walkthroughs and custom application development,” said Tim McDonough, vice president at Unity, “but these firms have a whole bunch of people that would like to be able to view their Revit data easily in a 3D engine like Unity without having to be a software developer, which is what our current tools are built for.” Reflect, which will launch publicly this fall, connects with existing software suites like Revit and Trimble to leverage the vast amounts of data that designers and contractors rely upon, and uses it to create new visualizations, simulations, and AR and VR experiences. Users can view and collaborate across BIM software and Reflect, which are synchronized in real time across multiple devices, both desktop and mobile. “Users were saying it took them weeks to get data out of Revit into Unity, and by the time they got it out, the project had moved on and what was done was irrelevant,” said McDonough. “We’ve taken out the drudgery so that now what used to take weeks takes just minutes.” https://youtu.be/YnwcGfr0Uk0 A number of firms have already been putting Reflect to the test. Reflect is open source and allows users to develop their own applications, whether for use in their firm or for a broader architectural public. 
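At its core, the live synchronization McDonough describes can be pictured as a publish-subscribe loop: an edit to a model element, together with its metadata, is pushed to every connected viewer at once. The sketch below is a minimal illustration of that idea in Python; all class and field names are hypothetical, and it is not based on Reflect's actual API.

```python
# Illustrative sketch (hypothetical names, not Reflect's actual API):
# a BIM element carries both geometry and metadata, and a change made
# in the authoring tool is relayed to every subscribed viewer at once.

class BimElement:
    def __init__(self, element_id, geometry, metadata):
        self.element_id = element_id
        self.geometry = geometry      # e.g., a mesh reference
        self.metadata = metadata      # e.g., material, fire rating, cost

class SyncHub:
    """Relays element updates from the design tool to all connected viewers."""
    def __init__(self):
        self.viewers = []

    def subscribe(self, viewer):
        self.viewers.append(viewer)

    def publish(self, element):
        for viewer in self.viewers:
            viewer.receive(element)

class Viewer:
    def __init__(self, name):
        self.name = name
        self.scene = {}               # element_id -> latest BimElement

    def receive(self, element):
        self.scene[element.element_id] = element  # overwrite with latest state

# A wall edited in the design tool appears in every viewer's scene.
hub = SyncHub()
desktop, tablet = Viewer("desktop"), Viewer("tablet")
hub.subscribe(desktop)
hub.subscribe(tablet)
hub.publish(BimElement("wall-01", "mesh:wall", {"material": "concrete"}))
```

The point of the pattern is that viewers never import or export anything: they simply hold the latest published state, which is why an edit can propagate in seconds rather than weeks.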
SHoP Architects has been trying out Reflect since the software entered its alpha phase this summer, creating various solutions to test on the firm’s supertall project at 9 DeKalb Avenue in Brooklyn. Adam Chernick, an associate at SHoP focusing on AR and VR research, noted that while showing off buildings in software like Unity has become part of standard practice, getting those visualizations attached to critical information has been a challenge up until now. “It hasn't been super difficult to get the geometry into the game engines,” he said, “but what has been even more difficult is getting that data into the game engines.” One of the first uses for Reflect that the SHoP team devised was an AR application that allowed them to monitor the progress of 9 DeKalb and easily oversee construction sequencing using color-coded panels that map onto the building’s model in their office. Chernick explained that there was a huge number of exterior window panels to keep track of, and that the app really helped. “We wanted to be able to visualize where we are in the construction process from anywhere—whether in VR or AR, and be able to get a live update of its status,” he said. “Now we can watch the building being constructed in real time.” The SHoP team has also leveraged the power of Reflect—and its integration with Unity—to create new visualization tools for acoustic modeling. “We created an immersive acoustic simulator where you get to see how a sound wave expands through space, reflects off of walls, and interacts with geometry,” said Christopher Morse, an associate for interactive visualization at SHoP. “You can slow it down, you can pause it, and you can stop it.” The idea, he explained, is to help architects make acoustic decisions earlier in the design process. “Currently a lot of those acoustic decisions come later and most of the geometry is already decided,” Morse said, noting that at a certain point, all designers can really do is add carpeting or acoustic tiling. 
“But we want to use these tools earlier, and in order for that to actually work, we needed to enable an iterative feedback loop so that you can create a design, analyze and evaluate it, and then make changes based on your analysis.” With Reflect, there's also no more grueling import and export process, which Morse said had prevented designers from even incorporating such tools into their workflows. “Once we had Reflect, we integrated it into our existing acoustic visualization software in order to make that round trip quicker, so that people can put on the headset, make a change in Revit, and instantly reevaluate based on those changes.” There is also metadata attached to the geometry, such as material information. While 9 DeKalb is too far along in its construction to incorporate the new software heavily into the design, SHoP has begun testing out its acoustic modeling app in the lobby of the project. https://youtu.be/f0IA55N_99o Reflect could also provide BIM data in a more user-friendly package to more people working on building projects. “We think that BIM is so valuable, but not enough people get to use it,” said McDonough. “We were trying to figure out how to get BIM in the hands of people on a construction site, so everyone can see all that information at a human scale.” At SHoP, this means creating apps that contractors can use on the job. Currently, the firm’s AR apps work on mobile devices, but SHoP hopes that, as AR headsets become more mainstream, it will also be able to use the apps on products such as the HoloLens. “This could be a paradigm shift,” said Chernick. “We realize that this massive, thousand-sheet set of construction documents that we need to create in order to get a building built is not going anywhere soon. But what we can do is help make this process more efficient and help our construction teams understand and potentially build these projects in more efficient ways.”
Placeholder Alt Text

Aesthetics of Prosthetics compares computer-enhanced design practices

How have contemporary architecture and culture been shaped by our access to digital tools, technologies, and computational devices? This was the central question of Aesthetics of Prosthetics, the Pratt Institute Department of Architecture’s first alumni-run exhibition, curated by recent alumni and current students Ceren Arslan, Alican Taylan, Can Imamoglu, and Irmak Ciftci. The exhibition, which closed last week, took place at Siegel Gallery in Brooklyn. The curatorial team staged an open call for submissions that addressed the ubiquity of “prosthetic intelligence” in how we interact with and design the built environment. “We define prosthetic intelligence as any device or tool that enhances our mental environment as opposed to our physical environment,” read the curatorial statement. “Here is the simplest everyday example: When at a restaurant with friends, you reach out to your smartphone to do an online search for a reference to further the conversation, you use prosthetic intelligence.” As none of the works shown have actually been built, the pieces experimented with the possibilities for representation and fabrication that “prosthetic intelligence” allows. The selected submissions used a range of technologies and methods, including photography, digital collage, AI, digital modeling, and virtual reality. The abundant access to data and its role in shaping architecture and aesthetics was a pervasive theme among the show's participants. Ceren Arslan's Los Angeles, for instance, used photo collage and editing to compile internet-sourced images into an imaginary yet believable streetscape. Others speculated about data visualization at a moment when drawings are increasingly expected to be read not only by humans but by machines and AI, as in Brandon Wetzel's deep data drawing.

"The work shown at the exhibition, rather than serving as a speculative criticism pointing toward a techno-fetishist paradigm, tries to act as a recording device to capture a moment in architectural discourse. Both the excitement and the skepticism around the presented methodologies are due to the fact that they are yet to come to fruition as built projects," said the curators in a statement. 

Placeholder Alt Text

Morpholio brings Board software to desktop with expanded pro-features and VR

Morpholio, the architect-turned-developer-run company known for its Trace app, which blends augmented reality, digital hand drafting, and other architectural tools on portable devices, has brought its interior design program, Board, to desktops for the first time. Coming on the heels of the new macOS Catalina update, the desktop version of Board leverages Apple’s new Mac Catalyst developer tool, which makes translating iOS apps to the desktop simpler. Board, which is intended to apply a mood-board logic to technical interior design problems, has been designed not only for professionals but also to make home design easier for average consumers. That said, with Board for Mac, Morpholio hopes to “take advantage of the unique properties of the desktop environment,” said Morpholio co-founder Mark Collins in a press release from the company, “which is essential for professional work.” The desktop app will include mood board “super tools,” such as layer control and magic-wand selection and deletion, as well as a feature called “Ava,” which creates spec sheets for clients and contractors. Ava offers automatic suggestions to match colors and forms, along with libraries of products from larger companies like Herman Miller and Knoll and smaller designers like Eskayel. It will also include new export features and provide further compatibility with Adobe and Autodesk products (as well as Pinterest). In addition, while Board for mobile already has AR features that allow furniture to be placed in space at scale, the desktop version will allow for VR integration. “A typical furniture catalog would rely on still images,” says Morpholio co-founder Anna Kenoff, “but Board allows you to experience expertly rendered models, created by the storytellers at Theia Interactive. You can view and spin around your favorite furniture pieces and experience them in every dimension. 
You can zoom in to stitching and materiality and feel the shade and shadows on their forms.” Additional viewing and presentation features will be built in as well, and Board will take full advantage of Catalina’s updated Dark Mode for those who prefer to use it. “When Apple released Mac Catalyst, they definitely had creative professionals in mind,” says Kenoff of the recent Apple release. “They wanted to amplify the power of mobile apps by combining them with the precision capable on a Mac. Few architects and designers work exclusively on a laptop, desktop, or tablet. We hope to make our apps available wherever designers are working.”
Placeholder Alt Text

You can paint unbuilt Michael Graves projects in VR

The late Michael Graves has seen his previously unbuilt work finally realized in a virtual reality environment thanks to Imagined Landscapes. The interactive sightseeing experience was created by Kilograph, a Los Angeles creative studio that has worked with firms like Gensler and Zaha Hadid Architects. Based on Graves’s painted plans for Barranco de Veneguera, an unbuilt Canary Islands resort originally planned in 1999, Imagined Landscapes allows users to go on an impressionistic, watercolor-esque romp through the resort, and through the act of drawing itself. Each area of the resort begins as just an outline, which users can fill in by taking up a virtual paintbrush. “The more you let people use their hands, the more connected they’ll feel to the world around them,” said Runze Zhang, a Kilograph VR designer, in a press release. Barranco de Veneguera was meant to be a sprawling resort for 12,000 people running down a three-and-a-half-mile valley all the way to the ocean. Graves imagined two greywater-irrigated golf courses as a green ribbon across the valley, a dense “town center” on the coast, and terraced hotels, all made to reflect and use the region’s topography; however, the resort never came to fruition. For the most part, Imagined Landscapes was developed in Unreal Engine 4, but the watercolor effect is a proprietary development by Kilograph. To get a natural look, the team layered elements like displacement maps, world-position information, and post-processing effects, creating a visual that mirrors Graves’s colors and style. Gesture controls were then created using Leap Motion, a hand-tracking hardware sensor, to produce an experience tailored to our natural instincts around movement and painting and to make the interaction feel more authentic. You can download Imagined Landscapes directly from Kilograph’s site or try it out October 2 at the WUHO Gallery at Woodbury University in Hollywood, California.
Placeholder Alt Text

Apple and New Museum team up for choreographed urban AR art tours

New York's New Museum, which has already launched a fair share of tech-forward initiatives, like the net-art preservation and theorization platform Rhizome and NEW INC, has teamed up with Apple over the past year and a half to create a new augmented reality (AR) program called [AR]T. New Museum director Lisa Phillips and artistic director Massimiliano Gioni selected the artists Nick Cave, Nathalie Djurberg and Hans Berg, Cao Fei, John Giorno, Carsten Höller, and Pipilotti Rist to create new installations that display the artistic potential of AR and help advance the museum’s own mixed-reality strategy. Each of the artists will create interactive AR artworks that can be viewed via iPhones with the [AR]T app on “choreographed” street tours beginning at a limited number of Apple stores across six cities. Users will be able to capture the mixed reality installations in photos and video through their phones. Additionally, Nick Cave has created an AR installation titled Amass that can be viewed in any Apple store, and the company has worked with artist and educator Sarah Rothberg to help develop programs that initiate beginners into developing their own AR experiences. This announcement comes on the heels of much industry speculation regarding Apple’s AR and VR plans, in part encouraged by recent hires from the gaming industry, like that of Xbox co-creator Nat Brown, previously a VR engineer at Valve. While some artists, institutions, and architects have embraced AR and VR, many remain skeptical of the technology, and not just on artistic grounds. Writing in the Observer, journalist Helen Holmes wonders if “Apple wants the public to engage with their augmented reality lab because they want to learn as much about their consumers as possible, including and especially how we express ourselves creatively when given new tools.” The [AR]T app will drop on August 10 in the following cities: New York, San Francisco, London, Paris, Hong Kong, and Tokyo.
Placeholder Alt Text

How can new technologies make construction safer?

Construction remains one of the most dangerous careers in the United States. To stop accidents before they happen, construction companies are turning to emerging technologies to improve workplace safety, from virtual reality and drone photography to IoT-connected tools and machine learning. That said, some solutions come with the looming specter of workplace surveillance in the name of safety, with all of the Black Mirror-esque possibilities that entails. The Boston-based construction company Suffolk has turned to artificial intelligence to try to make construction safer. Suffolk has been collaborating with the computer vision company Smartvid.io to create a digital watchdog of sorts that uses a deep-learning algorithm and workplace images to flag dangerous situations and workers engaging in hazardous behavior, like failing to wear safety equipment or working too close to machinery. Suffolk has even managed to get some of its smaller competitors to join it in data sharing, a mutually beneficial arrangement since machine learning systems require large amounts of example data, something that's harder for smaller operations to gather. Suffolk hopes to use this decade’s worth of aggregated information, as well as scheduling data, reports, and info from IoT sensors, to create predictive algorithms that will help prevent injuries and accidents before they happen and increase productivity. Newer startups are also entering the AEC AI fray, including three supported by URBAN-X. The bi-coastal Versatile Natures is billing itself as the “world's first onsite data-provider,” aiming to transform construction sites with sensors that allow managers to make decisions proactively. Buildstream is embedding sensors in equipment and construction machinery to make them communicative, and Contextere, focusing on people instead, claims that its use of the IoT will connect different members of the workforce. 
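The kind of flagging Suffolk's digital watchdog performs sits downstream of a vision model: given per-worker detections, simple rules decide what counts as hazardous. The sketch below illustrates only that rule-based filtering step; the field names and thresholds are assumptions for illustration, and the detection itself, which is the hard machine-learning part, is left out entirely.

```python
# Illustrative sketch (hypothetical, not Smartvid.io's system): given
# detections already produced by a vision model, flag workers who are
# missing required safety gear or standing too close to heavy machinery.

REQUIRED_GEAR = {"hard_hat", "safety_vest"}   # assumed policy
MIN_MACHINE_DISTANCE = 3.0                    # meters, an assumed threshold

def flag_hazards(detections):
    """Each detection: {'worker_id': str, 'gear': set, 'machine_distance': float}."""
    flags = []
    for d in detections:
        missing = REQUIRED_GEAR - d["gear"]
        if missing:
            flags.append((d["worker_id"], f"missing gear: {sorted(missing)}"))
        if d["machine_distance"] < MIN_MACHINE_DISTANCE:
            flags.append((d["worker_id"], "too close to machinery"))
    return flags

# One compliant worker, one who triggers both rules.
alerts = flag_hazards([
    {"worker_id": "w1", "gear": {"hard_hat", "safety_vest"}, "machine_distance": 5.0},
    {"worker_id": "w2", "gear": {"hard_hat"}, "machine_distance": 1.5},
])
```

Separating the rules from the model is also what makes shared data useful: pooled images improve the detector, while each firm can tune thresholds like the distance cutoff to its own safety policy.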
At the Florida-based firm Haskell, instead of just deploying surveillance on the job site, the team is addressing the problem before construction workers even get into the field. While videos and quizzes are one way to train employees, Haskell saw the potential for interactive technologies, namely virtual reality, to really boost employee training in a safe context. In its search for VR systems that might suit its needs, Haskell discovered that no existing solutions were well-suited to the particulars of construction. Along with its venture capital spinoff, Dysruptek, the firm partnered with software engineering and game design students at Kennesaw State University in Georgia to develop the Hazard Elimination/Risk Oversight program, or HERO, built with software like Revit and Unity. The video game-like program places users on a job site, derived from images taken by drone and 360-degree cameras at a Florida wastewater treatment plant that Haskell built, and evaluates a trainee’s performance and ability to follow safety protocols in an ever-changing environment. At Skanska USA, where 360-degree photography, laser scanning, drones, and even virtual reality are becoming increasingly commonplace, employees are realizing the potential of these new technologies not just for improved efficiency and accuracy in design and construction, but for overall job site safety. Albert Zulps, Skanska’s regional director of virtual design and construction, says that the tech goes beyond BIM and design uses, and actively helps avoid accidents. “Having models and being able to plan virtually and communicate is really important,” Zulps explained, noting that while BIM models are now pretty much universally trusted in the AEC industries, the increased accuracy of capture technologies is tying them not just to predictions but to the realities of the site. “For safety, you can use those models to really clearly plan your daily tasks. 
You build virtually before you actually build, and then foresee some of the things you might not have if you didn't have that luxury.” Like Suffolk, Skanska has partnered with Smartvid.io to help them process data. As technology continues to evolve, the ever-growing construction industry will hopefully be not just more cost-efficient, but safer overall.
Placeholder Alt Text

Architect creates app to change how exhibitions are designed

For all the advances in technology over the past decade, the experience of curating and viewing museum shows has remained relatively unchanged. Even though digital archive systems exist and have certainly helped bring old institutions into the present, they have relatively little influence over the ways museum shows are designed and shared. The normal practice is more or less “old school” and even borderline “dysfunctional,” said Bika Rebek, principal of the New York and Vienna–based firm Some Place Studio. In fact, a survey she conducted early on found that the various software suites museum professionals were using were major time sinks: fifty percent of respondents said they felt they were “wasting time” trying to fill in data or prepare presentations for design teams. To Rebek, this is very much an architectural problem, or at least a problem architects can solve. Over the past two years, supported by NEW INC and the Knight Foundation, she has been developing Tools for Show, an interactive web-based application for designing and exploring exhibitions at various scales—from the level of a vitrine to a multi-floor museum. Leveraging her experience as an architect, 3D graphics expert, and exhibition designer (she’s worked on major shows for the Met and Met Breuer, including the OMA-led design for the 2016 Costume Institute exhibition Manus x Machina), Rebek set out to enable exhibition designers and curators to collaborate, and to empower new ways of engaging with cultural material for users anywhere. Currently, institutions use many different gallery tools, she explained, which don’t necessarily interact with one another and don’t usually let curators think spatially in a straightforward way. 
Tools for Show allows users to import all sorts of information and metadata from existing collection management software (or enter it anew); this data is attached to artworks stored in a library, and the works can then be dragged and dropped into a 3D environment at scale. Paintings and simple 3D shapes are automatically generated, though for more complex forms, where an image projected onto a form of a similar footprint isn’t enough, users can create their own models. For example, to reproduce the New Museum’s 2017 show Trigger: Gender as a Tool and a Weapon, Rebek rendered the space and included many of the basic furnishings unique to the museum. For other projects, like a test case with the Louvre's sculptures, she found free-to-use models and 3D scans online. Users can drag these objects across the 3D environments and access in-depth information about them with just a click. With quick visual results and Google Docs-style automatic updates for collaboration, Tools for Show could help replace not just cumbersome content management systems, but endless emails too. Rebek sees Tools for Show as having many potential uses. It can be used to produce shows, allowing curators to collaboratively and easily design and redesign their exhibitions, and, after a show comes down, it can serve as an archive. It can also be its own presentation system—not only allowing “visitors” from across the globe to see shows they might otherwise be unable to see, but also creating new interactive exhibitions or even just vitrines, something she’s been testing out with Miami’s Vizcaya Museum and Gardens. More than just making work easier for curators and designers, Tools for Show could give a degree of curatorial power and play to a broader audience. “[Tools for Show] could give all people the ability to curate their own show without any technical knowledge,” she explained. And, after all, you can't move around archival materials IRL, so why not on an iPad? 
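The import-then-place workflow described above, a library of artworks carrying their metadata that can be positioned in a scaled 3D environment, can be sketched as a simple data model. Every name below is hypothetical and chosen for illustration; it is not drawn from Tools for Show itself.

```python
# Illustrative data model (hypothetical names, not Tools for Show's schema):
# artworks imported with their metadata live in a library, and placing one
# in the 3D gallery records its position at real-world scale.

class Artwork:
    def __init__(self, title, width_cm, height_cm, metadata):
        self.title = title
        self.width_cm = width_cm       # real dimensions keep placement at scale
        self.height_cm = height_cm
        self.metadata = metadata       # e.g., artist, date, lender, medium

class Exhibition:
    def __init__(self):
        self.library = {}              # imported but not yet placed
        self.placements = []           # (artwork, wall, x_cm, y_cm)

    def import_artwork(self, artwork):
        self.library[artwork.title] = artwork

    def place(self, title, wall, x_cm, y_cm):
        artwork = self.library[title]
        self.placements.append((artwork, wall, x_cm, y_cm))

    def info(self, title):
        """Clicking a placed work surfaces its attached metadata."""
        return self.library[title].metadata

# Import a painting with its metadata, then hang it on a gallery wall.
show = Exhibition()
show.import_artwork(Artwork("Untitled", 120, 90, {"artist": "Unknown", "date": "1972"}))
show.place("Untitled", wall="north", x_cm=250, y_cm=150)
```

Because the library and the placements are separate, the same imported collection can back a working design session, a later rearrangement, or an archived snapshot of the show after it comes down.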
While some of the curator-focused features of Tools for Show are still in the testing phase, institutions can already request the new display tools like those shown at Vizcaya. Rebek, a faculty member at Columbia University's Graduate School of Architecture, Planning, and Preservation, has also worked with students to use Tools for Show in conjunction with photogrammetry techniques in an effort to develop new display methods for otherwise inaccessible parts of the Intrepid Sea, Air & Space Museum, a naval and aerospace history museum located in a decommissioned aircraft carrier floating in the Hudson River. At a recent critique, museum curators were invited to see the students’ proposals and explore spatial visualizations of the museum through interactive 3D models, AR, and VR, as well as in-browser and mobile tools that included all sorts of additional media and information.