Posts tagged with "augmented reality":

Artist Artie Vierkant uses augmented reality to put his art in your hands

Brooklyn-based artist Artie Vierkant melds photography, commercial printing, and sculpture. For Vierkant, there is no real hierarchical distinction between art experienced in real life and art experienced in a photograph. In Vierkant’s post-digital world, 2-D and 3-D, image and object, original and copy, exist on a level playing field. He often takes photos of his installations, modifies them, and repurposes them as new works in and of themselves. Most recently, this has taken the form of an augmented reality (AR) app, Image Object (a term Vierkant coined in 2010 to describe work that “exist[s] somewhere between physical sculptures and altered documentation images”), released in conjunction with his exhibition Rooms greet people by name at Galerie Perrotin on the Lower East Side. Vierkant sat down with The Architect’s Newspaper to discuss the app, the boundaries between 2-D and 3-D, and what augmented reality means for the future of public space.

The Architect's Newspaper: Along with your exhibition Rooms greet people by name at Galerie Perrotin in New York, you’ve released an app, Image Object. Can you give us a little background on the app?

Artie Vierkant: The app is functionally just a camera app. Part of the idea is for people to take photos and videos with it. For me, that’s just another extension of the work. It started because I realized augmented reality platforms are basically what I do already with the installation view editing, in a very simple way. I started playing around with those tools and realized that if I applied the same principles as in the installation views, AR could create essentially the exact same aesthetic experience, but perceptually in space, where you could wander through and around the work. It takes the reproduction in a photo on my phone or online or in a book and renders it spatially.

AN: Like much of your work, this app troubles the boundary between 2-D and 3-D, perhaps taking it even further than what you've done before. If you're using the Image Object app in the exhibition, there’s a simultaneity between the art’s instantiation in the gallery space and in the flat space of your screen.

Artie Vierkant: I’m always quite serious about these points of intersection.

AN: But, on the other hand, the app works anywhere. You can attend the exhibition without going to the gallery.

Artie Vierkant: Or make your own.

AN: I was reading somewhere that you said digital space was “susceptible to modification.” How might we think of physical space along these lines? One of the big questions for AR is where the boundaries between physical spatial experience and digital experience are. And if those boundaries even matter.

Artie Vierkant: I think it’s becoming increasingly obvious that those distinctions and boundaries don't really matter. So many of these technologies are just prosthetics that extend our regular perceived lived reality. The idea of having two totally separate realms was debunked long ago. We’re living in an incredibly weird time.

AN: The idea of being “post-internet” presented the notion that the internet became so pervasive it simply became the background of our world, not some special, fetishizable thing apart from it. It infiltrated existence. Now everything is passively designed to accommodate our use of the internet, to accommodate computation—possibly without even conscious thought. What sort of spaces and designs can we imagine as AR becomes increasingly high-level and increasingly entrenched?
Artie Vierkant: This is maybe where the really interesting possibilities with AR lie. For example, buildings are designed knowing that a certain type of use is going to be predominant within them, but AR actually creates a situation where for the first time you could introduce a completely other layer. You could say that's not totally the case, since many of these things you could physically make, though you'd waste a lot of material and resources doing it.

AN: The gallery is almost a perfect metaphor for this, though, because it is always claiming to be an empty, evacuated space.

Artie Vierkant: The white cube will not stop trying to attest that it is a neutral zone or a completely empty space of limitless possibility. Which is obviously false.

AN: AR takes that notion of emptiness and asks why bother filling it with stuff when you can fill it digitally. In this way, the gallery is the most extreme test case for the relationship between physical space and AR. Another interesting problem of AR is its relation to private and public space. The gallery is a space we occupy with others, but the app’s view of it is limited to our own phones. What does AR mean for public space and togetherness in physical space?

Artie Vierkant: I have my own assumptions about how a lot of these technologies will play out and continue to be developed. I don't think the more utopian options are going to play out in the short term, frankly. Clearly one of the issues that we have right now with "digital space” is its relationship to corporations. Social media space is basically just composed of huge advertising companies. You could imagine a more egalitarian version of this under capitalism, where you're actively selling your attention as a resource and being remunerated for it.

AN: Still, that remains capitalism as such, if a less extreme variety of it. Even the more radical proposals that have existed have quickly turned into commercial tools; if late capitalism is good at anything, it's rapidly subsuming anything that was initially meant to oppose it.

Artie Vierkant: But, outside of the individualized experience the market is predisposed to, you could also imagine a very strange reality produced by a collectivized augmented reality experience, where you have a separate layer over everything, or different spaces you could go into, like a white cube, and load up things that people have left there.

Rooms greet people by name will be on view at Galerie Perrotin until April 8. Image Object will remain downloadable from the Apple App Store.

Artie Vierkant: Rooms greet people by name
Galerie Perrotin
130 Orchard Street
New York, NY
Through April 8

New York City will fund a virtual reality and augmented reality lab

In a bid to trump their West Coast rivals, the New York City Economic Development Corporation (NYCEDC) and the Mayor's Office of Media and Entertainment (MOME) yesterday announced that they will invest $6 million in a virtual reality (VR) and augmented reality (AR) lab—the first of its kind on the East Coast. The NYCEDC and MOME are set to release a request for proposals early next year to set up and run the lab. According to NYCEDC President Maria Torres-Springer, the decision was made to make the most of the city's rapidly emerging VR technology scene. The lab, whose location is yet to be decided, will also be the country's first publicly funded institution of its kind.

Earlier this year, Antonio Pacheco, the West editor of The Architect's Newspaper, noted the rise of VR at architecture firms such as Gensler, NBBJ, and Özel in California.

The lab in New York will serve as an incubator for VR/AR businesses. In a press release, the NYCEDC and MOME said that the city's VR/AR industry has attracted more than $50 million in investment over the past year, along with a 125 percent increase in job demand. In terms of operation, the lab will support the growth of VR/AR companies by providing space, infrastructure, and resources. It will also serve as a gathering place for people working in the industry.

“The timing could not be better. The talent in New York City, along with its anchor industries, place this city in a unique position to propel its sprouting VR/AR sector from early disruption to everyday, cross-sector application. Convening the technologists, academics, storytellers and start-up veterans that New York City hosts and attracts will create an invigorating boost to VR/AR’s momentum as we head into 2017,” said Adaora Udoji, managing director of The Boshia Group and adjunct professor of storytelling at New York University.

New wearable technology lets you feel data on your skin

Imagine you are feeling your way through a maze blindfolded, informed only by touch. Now imagine the maze isn't real, but a digital construction. This Matrix-like scenario was recently used by the Interactive Architecture Lab at the Bartlett School of Architecture in London to test its latest innovation, Sarotis: wearable technology that functions as a second skin to heighten the user's awareness of his or her surroundings.

Combining soft robotics with depth sensors, the prosthetic technology works in tandem with Google's Project Tango, a computer vision platform that combines depth sensing, motion tracking, and area learning to allow a smartphone or other device to "see" its environment in 3-D. Sarotis translates this data into a tactile response, inflating or exerting pressure to guide the user. It is made from a soft fabric that wraps around the body like a second skin.

Sarotis: Experimental Prosthesis from Interactive Architecture Lab on Vimeo.

As an example, the lab blindfolded participants and had them navigate an empty room with an invisible path drawn in it using Project Tango. Wearing the Sarotis technology, participants were able to navigate the maze fairly easily. Afterward, when asked to draw the maze, they were able to reproduce its form as well.

Currently, the most obvious application of Sarotis is to aid those who are visually impaired: gentle pressure could steer someone away from curbs, walls, or other obstacles. However, the Interactive Architecture Lab predicts that 3-D vision technologies like Project Tango will become standard on mobile devices, and that by 2020, 70 percent of the world's population will have access to 3-D scanning, depth tracking, motion awareness, and similar capabilities on their smartphones. Combined, Sarotis and 3-D computer vision could radically expand the possibilities for navigation, gaming, safety, and other popular applications.

Sarotis: Wearable Futures from Interactive Architecture Lab on Vimeo.
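The lab has not published its control code, but the basic loop it describes (read a distance from the depth sensor, convert it into how firmly the soft actuator presses against the skin) can be sketched in a few lines. The snippet below is a hypothetical illustration only: the range limit, the linear mapping, and every name in it are assumptions, not the Interactive Architecture Lab's implementation.

```python
# Hypothetical sketch of the Sarotis principle: nearer obstacles produce
# firmer pressure on the skin. The constants and names are illustrative
# assumptions, not the Interactive Architecture Lab's actual code.

MAX_RANGE_M = 2.0    # assumed: ignore anything farther than 2 meters
MAX_PRESSURE = 1.0   # assumed: normalized inflation (0 = flat, 1 = fully inflated)

def depth_to_pressure(distance_m: float) -> float:
    """Map the distance to the nearest obstacle to an inflation level."""
    if distance_m >= MAX_RANGE_M:
        return 0.0  # nothing nearby: the garment stays relaxed
    # Linear ramp: no pressure at MAX_RANGE_M, full pressure at contact.
    return MAX_PRESSURE * (1.0 - distance_m / MAX_RANGE_M)

if __name__ == "__main__":
    # A wearer approaching a wall would feel the fabric tighten gradually.
    for d in (2.5, 1.5, 0.75, 0.2):
        print(f"obstacle at {d:.2f} m -> inflation {depth_to_pressure(d):.2f}")
```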

Wander through a lush, pre-apocalyptic virtual garden at the Seattle Art Museum

While most people blunder around city streets tethered to their phones, one artist is offering an augmented reality (AR) experience that explores the effects of climate change on native flora. Tamiko Thiel's new installation at the Seattle Art Museum's Olympic Sculpture Park, Gardens of the Anthropocene, transforms the park into an AR landscape that depicts Seattle under the influence of climate change. In the coming years, Seattle is expected to have a dry climate similar to Eastern Washington or Northern California. The Creators Project reports that Thiel consulted with scientists at the University of Washington Center for Creative Conservation to learn how the park's plants could adapt to rising temperatures, drought, and extreme weather events.

When flesh-and-blood visitors stroll through the park, they walk through a fantasy garden of engorged Alexandrium catenella (called Alexandrium giganteus) and massive, mutant bullwhip kelp that sits over Elliot Street, the main thoroughfare adjacent to the park. Other plants gobble the food of the Anthropocene—electromagnetic radiation from smartphones, nutrients in buildings—or blossom in the liminal space between land and sea.

Thiel used 3ds Max and Blender to craft the plants, combining images of leaf textures to enhance her creations. She then used the Layar platform to implement Gardens of the Anthropocene IRL—the Seattle Art Museum set up booths at the park's entrances with instructions on how to see the piece.

The extraordinary imagery belies a foreboding conclusion, borne out by Thiel's conversations with the Center: The earth is warming at an ever-increasing pace, and disturbing weather changes are years, not decades, away. To steel yourself for the impending human-engendered apocalypse, check out the project page on Thiel's website for more images of the landscape in the Anthropocene. The exhibit runs through September 30, 2016.

This Japanese collective of designers and programmers is reimagining art, architecture, interactivity

The internet is hotly debating whether 2016 is the year virtual reality goes mainstream: New headset displays like Oculus Rift and Google Cardboard may bring immersive digital experiences to the masses. However, Japanese art collective teamLab has long been pursuing its own complex approach to digital environments. Unlike a headset, its interactive installations don’t privilege a single optical perspective. Many encourage you to move around or through them; your actions can even make them morph, mutate, and evolve. TeamLab’s show, located in the heart of Silicon Valley, gathers 20 of its works into a 20,000-square-foot digital interactive art extravaganza.

Founded in 2001, teamLab began with web design but now boasts a 400-member-strong group—which includes animators, programmers, architects, and more—who collaborate on everything from office interiors to software. The work’s whimsy belies its technical complexity: For instance, the LED cloud of Crystal Universe changes its dazzling colors and patterns based on visitors’ movements and a custom app. “The viewer is an active participant and ultimately becomes a part of the artwork,” said teamLab.

The collective also cites what it calls “Ultra Subjective Space” as inspiration: There’s no privileged position from which to experience its art. The piece Flowers and People, Cannot be Controlled but Live Together–A Whole Year per Hour stretches across the walls of an entire room. As in Crystal Universe, sensors and software constantly react to your movements, guaranteeing that a given display—seen from afar or up close—will never be exactly repeated.

Some exhibits go even further. Sketch Town and Sketch Town Papercraft let children color in a paper outline of a car that is then scanned, converted into 3-D, and inserted into a dynamic animated city. There, the children can move their digital cars—and other children’s as well—with their hands. They can even print a paper version of their car and fold it into a toy. “This project aims to encourage children to become aware of what the child next to them is drawing or creating,” said teamLab. “They may come to think it would be more fun to build something together.”
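For readers curious about how a drawing might travel from paper to screen, here is a minimal sketch of the scan-to-texture step, assuming an OpenCV-style pipeline. teamLab has not published its method, so the function, thresholds, and overall approach below are illustrative guesses rather than a description of Sketch Town's actual software.

```python
# Hypothetical sketch of the scan-to-texture step behind a Sketch Town-style
# piece: find the sheet of paper in the scan, flatten it, and reuse the
# colored drawing as a texture for a 3-D model. Written with OpenCV as an
# assumption; this is not teamLab's pipeline.
import cv2
import numpy as np

def scanned_drawing_to_texture(image_path: str, out_size=(512, 512)) -> np.ndarray:
    """Locate the drawn page in a scan and return it as a flat texture image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Separate the bright paper from the darker scanner background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    page = max(contours, key=cv2.contourArea)
    # Approximate the page outline with four corners.
    quad = cv2.approxPolyDP(page, 0.02 * cv2.arcLength(page, True), True)
    if len(quad) != 4:
        raise ValueError("could not find a four-cornered page in the scan")
    # In a real system the corners would be ordered consistently (top-left
    # first); here we warp them as found, which may rotate the result.
    src = quad.reshape(4, 2).astype(np.float32)
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    warp = cv2.getPerspectiveTransform(src, dst)
    # The flattened drawing can then be mapped onto the child's 3-D car model.
    return cv2.warpPerspective(img, warp, (w, h))
```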

Paradoxically, it’s the art’s shared physical spaces that make teamLab’s virtual realities more social. “As people become part of the same space and artwork, the relationship between self and others changes.”