Posts tagged with "augmented reality":

Apple’s latest announcement makes augmented reality easier for architects

Apple has wrapped up its keynote at the 2018 Worldwide Developers Conference (WWDC) and announced several big changes coming with iOS 12 that should shake things up for architects and designers. The biggest announcements focused on augmented reality (AR), as Apple unveiled ARKit 2.0. With a presentation backed by a constellation of big names such as Lego, Adobe, Autodesk, and more, the sequel to Apple’s original AR framework promised to bring a much more cohesive AR ecosystem to consumers.

For architects and interior designers, Apple’s promise of easily digitizing real-world objects and later placing them in an AR environment could have a huge impact on envisioning what a space will look like. If ARKit 2.0 succeeds, it could be used to decorate a room or to put different architectural options (literally) on the table without having to build physical iterative models. A collaborative, locally hosted AR experience was also shown off at the keynote, with two players using their iPads to knock down wooden towers Angry Birds-style.

One complaint about the original ARKit was the fragmented ecosystem and the inability to interact with AR objects across apps. For iOS 12, Apple has partnered with Pixar to develop a new file format for 3D models, USDZ (Universal Scene Description Zip). USDZ files should be openable across Apple devices, including Macs, and Adobe is promising native Creative Cloud support. The introduction of a streamlined system for sharing and examining 3D objects in the real world, and for creating persistent AR experiences in specific locations, likely means enhanced functionality for apps like Morpholio once the new iOS rolls out.

For those looking for more practical applications of the technology, Apple is also expanding its list of native AR apps. Developers who have measuring tools on the App Store likely won’t be happy with Measure, Apple’s in-house AR solution that lets users point their camera at an object and tap to measure it. Apple’s senior vice president of Software Engineering, Craig Federighi, took to the stage and used Measure to seamlessly calculate the length of a trunk and then of his own baby photo. iOS 12 will get a public release when the latest iPhone model rolls out in September of this year.
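The USDZ announcement is paired with Apple’s AR Quick Look viewer, which iOS 12 apps can present through the existing QLPreviewController API. As a rough illustration of that hand-off—the view controller and the chair.usdz asset below are hypothetical placeholders, not anything Apple demonstrated—a minimal sketch might look like this:

```swift
import UIKit
import QuickLook

// Minimal sketch: presenting a bundled USDZ model in AR Quick Look on iOS 12+.
// "chair.usdz" and ModelViewerController are placeholder names for illustration.
class ModelViewerController: UIViewController, QLPreviewControllerDataSource {

    func presentModel() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    // MARK: - QLPreviewControllerDataSource

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // NSURL conforms to QLPreviewItem, so a bundled file URL is enough.
        let url = Bundle.main.url(forResource: "chair", withExtension: "usdz")!
        return url as NSURL
    }
}
```

Quick Look handles the rest: the user gets the standard object/AR toggle and can place the model on a detected surface without the app writing any ARKit code of its own.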

For Suffolk Construction, new technologies must deliver value, not just change for change’s sake

In his panel presentation, “Enhanced Realities and Immersive Experiences,” at the upcoming Tech+ conference on May 22 in New York City, co-presenter Chris Mayer, chief innovation officer at Suffolk Construction, will discuss how innovative technologies like AR and VR are changing the industry. But chasing the latest “shiny, new object” isn’t what the talk—or even the Tech+ expo—is all about. “It’s not a lack of available technology solutions that could potentially be deployed,” he said. “It’s the idea about how do you focus on delivering value, not just delivering change, and how do you make sure that communication and collaboration are effective in supporting the organization?” Mayer has worked at Suffolk Construction for the past three years, applying what he learned during his 30-year tenure in print media at The Boston Globe to effectively integrate people, process, and technology. Print media is an industry that experienced significant disruption, and Mayer says the challenges faced and solutions employed there, along with "focusing on not just bright, shiny objects and technology, but how to create value out of that and the necessary steps of engaging people in the organization, and putting processes in place, were all potentially applicable” to the construction industry where he landed.

Creating value across the board

Data drives innovation, and value creation must remain front and center of the process. Otherwise, organizations risk going out of business if they focus solely on technology for technology’s sake, he says. “Part of the way we want to make sure we’re driving that [value] consistently in the innovation that we’re supporting is, ultimately, 'the juice is worth the squeeze.' You want to find something that’s going to matter.” And what counts isn’t just making incremental improvements to specific functions within the design and construction industry, which Mayer observes is highly distributed. Rather, it’s about taking a holistic view and rethinking the entire process end-to-end, he says. “I think we are at a point in time where looking both at opportunities vertically—which is traditionally where I think people would focus on productivity gains—but also looking horizontally at the entire value chain of the construction process from the initial touch point with the owner, through the design, through the financing stages, through the planning and optimization stages, through the execution and construction control periods into the warranty and the quality control process,” he explained. “The idea about how to scale innovation is a big opportunity for us in the industry.”

The right tool for the job

To that end, Suffolk Construction has created “smart labs” internally that serve as a sort of operations control tower where technologies can be invented, tested, implemented, and scaled. Using a series of interconnected screens on what Suffolk calls "huddle walls," the team can simultaneously view project information and financial reports or engage in lean strategies and design planning throughout the lifecycle of a project. The labs also feature a virtual reality "cave" with head-mounted displays in which projects can be viewed individually or as a group. Mayer doesn’t see immersive technology as a replacement for existing communication tools, but rather as a complement to them. “When I look at virtual reality or augmented reality… I see those as additional options to add value to the conversation,” he said. “Because some information is going to be most effectively delivered in sort of static form on an iPad, but some of it is going to be much more effective if you put on a headset and engage with it.” Whatever is most effective—whether it’s a 3-D model, a 2-D model, or shop drawings—is the tool that should be used, he adds. At the end of the day, that’s what adds the most value.

Immersive technology may be architecture’s best tool for communication

This year’s Tech+ conference—a groundbreaking event showcasing technological innovators in the AEC industry, taking place on May 22 in New York City—will feature pioneering speakers who are rethinking existing technological paradigms. Among them is Iffat Mai, practice application development leader for Perkins+Will, who will be co-presenting the panel discussion “Enhanced Realities and Immersive Experiences.” A self-described technology geek, Mai is excited that the design and construction industry, which has traditionally lagged behind in adopting new technology, is finally showing signs of receptivity. “What I’ve seen is a shift in some people’s attitude, of designers and project teams, who are very open-minded about accepting these new technologies and integrating them into their workflow and process,” she said. Mai notes that a number of software companies are making virtual reality (VR) and augmented reality (AR) platforms more compatible with existing design tools, which allows for greater integration and efficiency. “I’m really happy to see that’s happening in all different levels in our industry.”

It’s all about communication

Mai’s enthusiasm for change stems from her belief that these new tools are improving communication and client engagement—an assessment that has been tested in practice at Perkins+Will, the results of which she’ll share during her presentation at Tech+. “I think VR/AR is the ideal communication tool for the AEC industry,” Mai said. “As architects, communication of design is the bread and butter of our business.” Noting that many clients aren’t particularly adept at visualization, Mai suggests that 3-D technology can help them better understand not only how a design looks, but also gain a better sense of scale and how the space will actually feel. Oftentimes, clients look at drawings and say they understand them, but are surprised when a space is built because they don’t conceptualize the same way design practitioners do. Mixed reality solves this problem in many ways. “We’ve been implementing all these new technologies into our everyday design process and really looking to engage our stakeholders and our clients, and offer them the opportunity to be fully engaged in the design process,” Mai explained. “It’s not just giving them nice little drawings; we really put them into an immersive environment and encourage them to evaluate things by really understanding what the design is about so that, in the end, I think that the clients are a lot more comfortable and happy with the final product.”

Overcoming barriers to innovation

As a result, Mai says VR and AR technologies are streamlining the design and review process, saving both time and money. With the cost of hardware and software dropping, she suggests the barrier to entry will be lowered, especially for smaller firms that currently may not be able to afford these tools. Ultimately, wide-scale adoption of mixed reality technology boils down to two things, according to Mai: overcoming the fear of change and making a company-wide commitment to innovation. “If you can get over the fear of changing and have kind of long-term sight of the future and not be afraid of changing, that’s a critical component of innovation,” she said. “And then your company leaders have to be really promoting company-wide innovation, to have people just think out of the box and looking for new ways of doing things in every aspect of the company.”

TECH+ Expo from Architect's Newspaper on Vimeo.

Artist Artie Vierkant uses augmented reality to put his art in your hands

Brooklyn-based artist Artie Vierkant melds photography, commercial printing, and sculpture. For Vierkant, there is no real hierarchical distinction between art experienced in real life and art experienced in a photograph. In Vierkant’s post-digital world, 2-D and 3-D, image and object, original and copy all exist on a level playing field. He often takes photos of his installations, modifies them, and repurposes them as new works in and of themselves. Most recently, this has taken the form of an augmented reality (AR) app, Image Object (a term Vierkant coined in 2010 to describe work that “exist[s] somewhere between physical sculptures and altered documentation images”), released in conjunction with his exhibition Rooms greet people by name at Galerie Perrotin on the Lower East Side. Vierkant sat down with The Architect’s Newspaper to discuss the app, the boundaries between 2-D and 3-D, and what augmented reality means for the future of public space.

https://www.instagram.com/p/Bfq5LJuhF_9/?taken-by=avierkant

The Architect's Newspaper: Along with your exhibition Rooms greet people by name at Galerie Perrotin in New York, you’ve released an app, Image Object. Can you give us a little background on the app?

Artie Vierkant: The app is functionally just a camera app. Part of the idea is for people to take photos and videos with it. For me, that’s just another extension of the work. It started because I realized augmented reality platforms are basically what I do already with the installation view editing, in a very simple way. I started playing around with those tools and then realized that if I applied the same principles as what happened in the installation views, AR could essentially create the exact same aesthetic experience, but perceptually in space, where you could wander through and around the work. It takes the reproduction in a photo on my phone or online or in a book and renders it spatially.

AN: Like much of your work, this app troubles the boundary between 2-D and 3-D, perhaps taking it even further than what you've done before. If you're using the Image Object app in the exhibition, there’s a simultaneity between the art’s instantiation in the gallery space and in the flat space of your screen.

Artie Vierkant: I’m always quite serious about these points of intersection.

AN: But, on the other hand, the app works anywhere. You can attend the exhibition without going to the gallery.

Artie Vierkant: Or make your own.

AN: I was reading somewhere that you said digital space was “susceptible to modification.” How might we think of physical space along these lines? One of the big questions for AR is where the boundaries between physical spatial experience and digital experience are—and if those boundaries even matter.

Artie Vierkant: I think it’s becoming increasingly obvious that those distinctions and boundaries don't really matter. So many of these technologies are just prosthetics that extend our regular perceived lived reality. The idea of having two totally separate realms was debunked long ago. We’re living in an incredibly weird time.

AN: The idea of being “post-internet” presented the notion that the internet became so pervasive it was simply the background of our world, not some special, fetishizable thing apart from it. It infiltrated existence. Now everything is passively designed to accommodate our use of the internet, to accommodate computation—possibly without even conscious thought. What sort of spaces and designs can we imagine as AR becomes increasingly high-level and increasingly entrenched?

Artie Vierkant: This is what would maybe be the really interesting possibility with AR. For example, buildings are designed knowing that a certain type of use is going to be predominant within them, but AR actually creates a situation where, for the first time, you could introduce a completely other layer. You could say that’s not totally the case, because many of these things you could physically make—though you’d waste a bunch of material and resources doing it.

AN: The gallery is almost a perfect metaphor for this, though, because it is always claiming to be an empty, evacuated space.

Artie Vierkant: The white cube will not stop trying to attest that it is a neutral zone or a completely empty space of limitless possibility. Which is obviously false.

AN: AR takes that notion of emptiness and asks why bother filling it with stuff when you can fill it digitally. In this way, the gallery is the most extreme test case for the relationship between physical space and AR. Another interesting problem of AR is its relation to the private and the public in space. The gallery is a space we occupy with others, but the app’s view of it is limited to our own phones. What does AR mean for public space and togetherness in physical space?

Artie Vierkant: I have my own assumptions about how a lot of these technologies will play out and continue to be developed. I don't think that the more utopian options are going to play out in the short term, frankly. Clearly one of the issues that we have right now with "digital space” is its relationship to corporations. Social media space is basically just composed of huge advertising companies. You could imagine a more egalitarian version of this under capitalism, where you’re selling your own resources—where you're actively selling your attention and being remunerated for it.

AN: Still, that remains capitalism as such, if a less extreme variety of it. Even the more radical proposals that have existed have quickly turned into commercial tools; if late capitalism is good at anything, it's rapidly subsuming anything that was initially meant to oppose it.

Artie Vierkant: But, outside of the individualized experience the market is predisposed to, you could also imagine a very strange reality produced by having augmented reality as a shared, collectivized experience, where you have a separate life over everything, or different spaces you could go into—like a white cube—and load up some stuff that people have left there.

Rooms greet people by name will be on view at Galerie Perrotin until April 8. Image Object will remain downloadable from the Apple App Store.

Artie Vierkant: Rooms greet people by name
Galerie Perrotin, 130 Orchard Street, New York, NY
Through April 8

New York City will fund a virtual reality and augmented reality lab

In a bid to trump their West Coast rivals, the New York City Economic Development Corporation (NYCEDC) and the Mayor's Office of Media and Entertainment (MOME) yesterday announced that they will invest $6 million into a virtual reality (VR) and augmented reality (AR) lab—the first of its kind on the East Coast. The NYCEDC and MOME are set to release a request for proposals early next year to set up and run the lab. According to NYCEDC President Maria Torres-Springer, the decision was made to make the most of the rapidly emerging VR technology scene. The lab (whose location is yet to be decided) will also be the country's first-ever publicly funded institution of its kind.

Earlier this year, Antonio Pacheco, the West editor of The Architect's Newspaper, noted the rise of VR being used by architecture firms such as Gensler, NBBJ, and Özel in California. The lab in New York will serve as an incubator for VR/AR businesses. In a press release, the NYCEDC and MOME said that the city's VR/AR industry has seen more than $50 million invested in the past year, along with a 125 percent increase in job demand. In terms of operation, the lab will support the growth of VR/AR companies by providing space, infrastructure, and resources. Additionally, it will serve as a place for people involved in the industry to gather.

“The timing could not be better. The talent in New York City, along with its anchor industries, place this city in a unique position to propel its sprouting VR/AR sector from early disruption to everyday, cross-sector application. Convening the technologists, academics, storytellers and start-up veterans that New York City hosts and attracts will create an invigorating boost to VR/AR’s momentum as we head into 2017,” said Adaora Udoji, managing director of The Boshia Group and adjunct professor of storytelling at New York University.

New wearable technology lets you feel data on your skin

Imagine you are feeling your way through a maze blindfolded, guided only by your sense of touch. Now imagine the maze isn’t real, but is actually a digital construction. This Matrix-like scenario was recently used by the Interactive Architecture Lab at the Bartlett School of Architecture in London to test its latest innovation, Sarotis: wearable technology that functions as a second skin to heighten the user’s awareness of his or her surroundings.

Combining soft robotics with depth sensors, the prosthetic works in tandem with Google’s Project Tango, a computer-vision platform that uses a combination of depth sensing, motion tracking, and area learning to allow a smartphone or other device to “see” its environment in 3-D. Sarotis then translates this data into a tactile response, inflating or exerting pressure to guide the user. It is made from a soft fabric that wraps around the body like a second skin.

Sarotis: Experimental Prosthesis from Interactive Architecture Lab on Vimeo.

As an example, the lab blindfolded participants and had them navigate an empty room with an invisible path drawn in it using Project Tango. Wearing the Sarotis technology, participants were able to navigate the maze fairly easily. Afterward, participants were asked to draw the maze and were able to reproduce its form as well.

Currently, the most obvious application of Sarotis is to aid those who are visually impaired: the device could use gentle pressure to steer someone away from curbs, walls, or other obstacles. However, the Interactive Architecture Lab predicts that 3-D vision technologies like Project Tango will be available on the majority of mobile devices, and that by 2020, 70 percent of the world’s population will have access to 3-D scanning, depth tracking, motion awareness, and similar capabilities on their smartphones. Combined, Sarotis and 3-D computer vision could radically expand the possibilities for navigation, gaming, safety, and other popular applications.

Sarotis: Wearable Futures from Interactive Architecture Lab on Vimeo.
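The lab hasn’t published Sarotis’s control code, but the behavior it describes—converting a depth reading into an inflation level—can be sketched as a simple mapping. The thresholds and types below are illustrative assumptions, not the lab’s implementation:

```swift
import Foundation

// Hypothetical sketch of Sarotis-style feedback: map an obstacle distance from a
// depth sensor to an inflation level for a soft pneumatic cuff (0 = deflated, 1 = full).
struct HapticGuidance {
    let minDistance = 0.3   // meters; obstacles this close get full pressure
    let maxDistance = 2.0   // meters; obstacles beyond this are ignored

    func inflationLevel(forDistance distance: Double) -> Double {
        guard distance < maxDistance else { return 0.0 }   // nothing nearby
        guard distance > minDistance else { return 1.0 }   // obstacle imminent
        // Linear ramp: the closer the obstacle, the firmer the pressure on the skin.
        return (maxDistance - distance) / (maxDistance - minDistance)
    }
}

// Feed in distances as the depth sensor reports them.
let guidance = HapticGuidance()
for distance in [2.5, 1.8, 1.0, 0.4, 0.2] {
    let level = guidance.inflationLevel(forDistance: distance)
    print(String(format: "obstacle at %.1f m -> inflate to %.0f%%", distance, level * 100))
}
```

A real controller would presumably smooth these values over time and drive multiple cuffs to encode direction, but the distance-to-pressure ramp is the core of the guidance idea.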

Wander through a lush, pre-apocalyptic virtual garden at the Seattle Art Museum

While most people blunder around city streets tethered to their phones, one artist is offering an augmented reality (AR) experience that explores the effects of climate change on native flora. Tamiko Thiel's new installation at the Seattle Art Museum's Olympic Sculpture Park, Gardens of the Anthropocene, transforms the park into an AR landscape that depicts Seattle under the influence of climate change. In the coming years, Seattle is expected to have a dry climate similar to that of Eastern Washington or Northern California. The Creators Project reports that Thiel consulted with scientists at the University of Washington Center for Creative Conservation to learn how the park's plants could adapt to rising temperatures, drought, and extreme weather events.

When flesh-and-blood visitors stroll through the park, they walk through a fantasy garden of engorged Alexandrium catenella (called Alexandrium giganteus), while massive, mutant bullwhip kelp looms over Elliot Street, the main thoroughfare adjacent to the park. Other plants gobble the food of the Anthropocene—electromagnetic radiation from smartphones, nutrients in buildings—or blossom in the liminal space between land and sea. Thiel used 3ds Max and Blender to craft the plants, combining images of leaf textures to enhance her creations. She then used the Layar platform to implement Gardens of the Anthropocene IRL—the Seattle Art Museum set up booths at the park's entrances with instructions on how to see the piece.

The extraordinary imagery belies a foreboding conclusion, borne out by Thiel's conversations with the Center: the earth is warming at an ever-increasing pace, and disturbing weather changes are years, not decades, away. To steel yourself for the impending human-engendered apocalypse, check out the project page on Thiel's website for more images of the landscape in the Anthropocene. The exhibit runs through September 30, 2016.

This Japanese collective of designers and programmers is reimagining art, architecture, and interactivity

The internet is hotly debating whether 2016 is the year virtual reality goes mainstream: New headset displays like Oculus Rift and Google Cardboard may bring immersive digital experiences to the masses. However, Japanese art collective teamLab has long been pursuing its own complex approach to digital environments. Unlike a headset, its interactive installations don’t privilege a single optical perspective. Many encourage you to move around or through them; your actions can even make them morph, mutate, and evolve. TeamLab’s show, located in the heart of Silicon Valley, gathers 20 of its works into a 20,000-square-foot digital interactive art extravaganza.

Founded in 2001, teamLab began with web design but now boasts a 400-member-strong group—which includes animators, programmers, architects, and more—who collaborate on everything from office interiors to software. The work’s whimsy belies its technical complexity: For instance, the LED cloud of Crystal Universe changes its dazzling colors and patterns based on visitors’ movements and a custom app. “The viewer is an active participant and ultimately becomes a part of the artwork,” said teamLab.

The collective also cites what it calls “Ultra Subjective Space” as inspiration: There’s no privileged position from which to experience its art. The piece Flowers and People, Cannot be Controlled but Live Together–A Whole Year per Hour stretches across the walls of an entire room. As in Crystal Universe, sensors and software constantly react to your movements, guaranteeing that the display—seen from afar or up close—will never repeat exactly.

Some exhibits go even further. Sketch Town and Sketch Town Papercraft let children color in a paper outline of a car that is then scanned, converted into 3-D, and inserted into a dynamic animated city. There, the children can move their digital cars—and other children’s as well—with their hands. They can even print a paper version of their car and fold it into a toy. “This project aims to encourage children to become aware of what the child next to them is drawing or creating,” said teamLab. “They may come to think it would be more fun to build something together.”
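TeamLab hasn’t detailed how the scan-to-3-D step works, but a common way to achieve this kind of effect is to apply the scanned drawing as a texture on a pre-built template mesh whose UV layout matches the paper outline. The SceneKit sketch below is purely a guess at that general approach; the asset names are placeholders, not teamLab’s pipeline:

```swift
import SceneKit
import UIKit

// Hypothetical sketch: paint a child's scanned drawing onto a template car mesh.
// "car_template.scn" and the "car" node name are illustrative placeholders.
func makeCarNode(from scannedDrawing: UIImage) -> SCNNode? {
    guard let scene = SCNScene(named: "car_template.scn"),
          let carNode = scene.rootNode.childNode(withName: "car", recursively: true)
    else { return nil }

    // The template's UV mapping is assumed to line up with the paper outline,
    // so the flat drawing wraps correctly around the 3-D body.
    carNode.geometry?.firstMaterial?.diffuse.contents = scannedDrawing
    return carNode
}
```

The returned node could then be dropped into the animated city scene and moved around, which matches the interaction the installation describes.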

Paradoxically, it’s the art’s shared physical spaces that make teamLab’s virtual realities more social. “As people become part of the same space and artwork, the relationship between self and others changes.”