Posts tagged with "Augmented Reality":

Retail is getting reimagined with augmented reality

Retail is dead. Long live retail. With the ubiquity of online shopping, brick-and-mortar retail has become more competitive. Good deals and low prices aren't enough to draw customers into stores anymore; today's customers are looking for experiences, according to developers and retail prognosticators. Canadian outdoor goods retailer Mountain Equipment Co-op (MEC) has teamed up with creative technology studio Finger Food to offer an in-store—or in-home—experience that bridges the digital and the physical: augmented reality tent shopping.

"Retail has gone through significant disruption and it's only going to get faster," said David Labistour, CEO of MEC. The outdoor company sees this disruption as a unique opportunity for growth. MEC offers more tents than can fit in its stores. Rather than hanging excess tents from the ceiling, MEC asked Finger Food to develop an application that allows customers using a phone, tablet, or AR/VR goggles to see and explore a full-scale, fully rendered (inside and out) 3D version of every tent MEC sells. What's special about this particular use of the increasingly common AR technology is the unprecedented level of detail Finger Food was able to achieve.

Finger Food creates its ultra-realistic 3D models in an enormous room it calls the holodeck—named after the high-tech virtual reality rooms in Star Trek. Using a proprietary photogrammetry rig and accompanying software, the company can take thousands of photos of any object to capture its geometry and textures at extremely high resolution. In addition to the realism, Finger Food's solution is distinguished by its speed—scanning an object takes less than an hour, compared to the days that could be spent creating a 3D model from scratch—and the system has proven capable of capturing objects at any scale, from a pair of sunglasses to a semi-truck.

Their work for MEC isn't Finger Food's first foray into the retail space.
The group previously worked with Lowe's home improvement stores to develop two augmented reality apps. One lets users see what products look like in their homes—everything from accent tile to a six-burner stove—and easily make a purchase afterward. The other app guides users through Lowe's 100,000-square-foot stores to find the exact products they're looking for; it also notifies employees when an item needs restocking.

Customers can currently use the AR application at MEC's flagship Toronto store, with a larger rollout planned. "We believe the future of the customer experience will be significantly changed through the integration of technology," said Labistour. If these technologies prove successful, the retail experience and store design could change as well. In a future with augmented reality and next-day delivery, stores may need less space, as fewer items would be kept on display and in stock.

Apple and New Museum team up for choreographed urban AR art tours

New York's New Museum, which has already launched its fair share of tech-forward initiatives, like the net-art preservation and theorization platform Rhizome and NEW INC, has teamed up with Apple over the past year and a half to create a new augmented reality (AR) program called [AR]T. New Museum director Lisa Phillips and artistic director Massimiliano Gioni selected artists Nick Cave, Nathalie Djurberg and Hans Berg, Cao Fei, John Giorno, Carsten Höller, and Pipilotti Rist to create new installations that display the artistic potential of AR and help advance the museum’s own mixed-reality strategy. Each of the artists will create interactive AR artworks that can be viewed via iPhones with the [AR]T app on “choreographed” street tours that will begin in a limited number of Apple stores across six cities. Users will be able to capture the mixed-reality installations in photos and video through their phones. Additionally, Nick Cave has created an AR installation titled Amass that can be viewed in any Apple store, and the company has worked with artist and educator Sarah Rothberg to develop programs that initiate beginners into creating their own AR experiences.

This announcement comes on the heels of much industry AR and VR speculation regarding Apple, encouraged in part by recent hires from the gaming industry, like that of Xbox co-creator Nat Brown, previously a VR engineer at Valve. While some artists, institutions, and architects have embraced AR and VR, many remain skeptical of the technology, and not just on artistic grounds. Writing in the Observer, journalist Helen Holmes wonders if “Apple wants the public to engage with their augmented reality lab because they want to learn as much about their consumers as possible, including and especially how we express ourselves creatively when given new tools.”

The [AR]T app will drop on August 10 in the following cities: New York, San Francisco, London, Paris, Hong Kong, and Tokyo.

The Nonument database is saving forgotten 20th-century buildings

Nonument is committed to not only recording but celebrating the 20th century’s most important non-monuments. Founded in 2011, the multidisciplinary artist and research collective has amassed a record of built spaces that still stand, if barely. Forgotten through decay and technological or political change, these structures are being preserved by Nonument even as they fall out of favor in a changing 21st-century society.

Rather than present “a glorified collection of obscurities” or focus purely on architectural styles, founders Neja Tomšič and Martin Bricelj Baraga seek to develop a deeper understanding of public space and art, and of how politics shape these spaces in our world today. In partnership with the Mapping & Archiving Public Spaces (MAPS) project, the collective aims to catalog more than 120 forgotten sites around the globe and bring them back into the public eye.

Created by the Museum of Transitory Art, MAPS shares many of the goals of Nonument: its mission “aims to identify, map and archive public spaces, architecture, and monuments which are part of our cultural heritage, but are not yet identified as such.” And that’s where Nonument began. NONUMENT01 was a response to the demolition of a Brutalist icon, the McKeldin Fountain in Baltimore. The decision to demolish it was made with limited public engagement or input, though the fountain had been an important gathering point for protestors and creatives, and the visual centerpiece of McKeldin Square. Upon its removal in 2016, Lisa Moren, a professor of visual arts, enacted the first art installation of Nonument, debuting an augmented reality app that allowed users to recreate the fountain on their screens and interact with memories, like protest signs and koi fish, to discover their stories. The app and its launch event at the site continued the legacy of the lost monument and its role within the city, setting a precedent for Nonuments of the future.

The database is just one component of Nonument.
Case studies on architectural theory and live art, and performance events like Moren’s, are also an integral part of the collective’s mission, making it more than just an encyclopedia of decaying buildings. While the act of listing the monuments breathes back a certain degree of life, critical discourse and real-life opportunities for interaction with the listed structures complete a circle of study and renegotiation with the space they occupy—aligning with the overarching goals of the group.

From nuclear power plants in Austria to stone sculptures in Serbia, the database is set to become a comprehensive collection and research resource for the 20th century, continuing to unearth the stories that matter and rewriting the rules for the sustainable management of our cultural heritage.

BIG and UNStudio launch augmented reality design software

The phrase “bring a project to life” is thrown around casually by creative types of all creeds, from industrial designers to conceptual painters—people whose daily lives involve intense engagement with communication tools that allow the ideas in their heads to exist in the physical world. Emerging technologies from 3D software to VR goggles have revolutionized the way clients can experience a designer’s vision, and now Hyperform, a new collaborative and data-driven design tool, allows designers to literally immerse themselves—digitally—within a working project, blown up via augmented reality to 1:1 scale.

Hyperform comes from a collaboration between Bjarke Ingels Group (BIG), UNStudio, and Squint/Opera, a creative digital studio. These big-name studios believe their immersive software will enable designers to make the best decisions for the project and the client much faster, as the interactive elements come closer to complete project visualization than anything we’ve seen yet. Jan Bunge, managing director at Squint/Opera, said, “Hyperform marks the first time we can feel and sense a spatial condition before it gets built.” Client and designer can walk around a project, experiencing its massing, spatial qualities, and materiality, and use simple hand gestures to edit, delete, and alter the digital model in real time, before changes become too late or too expensive to make. In a concept film, the Hyperform user is depicted as a disembodied hand, the viewer’s own, pushing at virtual buttons suspended in space and scrolling through horizontal libraries of architectural drawings, 3D models, and plans.
Selecting a model and blowing it up with verbal cues to immersive size, the user shares it with a life-size colleague who materializes in pixelated form before our eyes, calling in and “ready to join the meeting.” BIG debuted the new tool at its curated exhibition, FORMGIVING: An Architectural Future History from Big Bang to Singularity, at the Danish Architecture Center in Copenhagen. Amid the exhibition of 71 BIG projects currently on the drawing boards, representing the firm’s active proposals for the future, Hyperform sits toward the end of the exhibition's “timeline”—at the top of the staircase, near “singularity”—as the software represents the step beyond perceiving mere reality into creating new, digital realities.

Artist Simon Denny explores the effects of digital and physical mining

Artist Simon Denny is digging into data as a landscape, unearthing the possibilities of extracting material both physical and informational in Mine, a show at the Australian museum Mona. The show has found a fitting setting at Tasmania’s iconoclastic museum, the privately run brainchild of entrepreneur David Walsh, which is itself a winding maze of darkened corridors partially carved into the Triassic sandstone of the Berriedale peninsula. The mine-shaft feeling is only heightened by the museum's new Nonda Katsalidis and Falk Peuser–designed underground extension—a level of subterranean spaces connected by circular stone tunnels with metal ribs that the museum calls Siloam.

Denny, whose previous work has fixated on cryptocurrency, the dissolution of borders, and other complications of our increasingly computerized world, works in the space between the two meanings of mine: both the extraction of physical material, like the rare earth metals and lithium necessary for our devices, and the data mining and bitcoin mining that have an increasingly clear environmental impact in the form of outsize carbon emissions and land use. Mine looks at technological shifts and their impact on the IRL environment, as well as the entanglements of colonization and economics that have propelled resource extraction and all its environmental impacts. Instead of a canary in a coal mine, Mine will feature an augmented reality version of the nearly extinct King Island brown thornbill, which researchers have recently discovered in Tasmania outside of its normal habitat, living inside a 3D version of a patent diagram of an Amazon warehouse cage that was in actuality designed for the company’s notoriously overworked and underpaid human workers. On the walls, the bird is overlaid onto pages of the patent, and the AR bird, whose habitat has been all but destroyed by industry, flits throughout the exhibition on visitors’ phones or on “The O,” the museum’s unusual electronic guide.
The exhibition has been designed as a trade show-cum-board game, in which various devices that extract resources from the land and from human labor are displayed on a giant version of Squatter, a classic Monopoly-style Australian board game about raising sheep. Another board game, called "Extractor," will act as the exhibition catalogue. Figurative work from other artists who investigate labor and automation will also be displayed, including Li Liao’s 2012 Consumption, which recalls the artist’s own experience working for the manufacturer Foxconn, and Patricia Piccinini’s 2002 Game Boys Advanced. The curators, Jarrod Rawlins and Emma Pike, hope that, taken together, these works will evince a “metaphorical workforce.” Mine is on view through April 13, 2020.

Architect creates app to change how exhibitions are designed

For all the advances in technology over the past decade, the experience of curating and viewing museum shows has remained relatively unchanged. Even though digital archive systems exist and have certainly helped bring old institutions into the present, they have relatively little influence over the ways museum shows are designed and shared. The normal practice is more or less “old school” and even borderline “dysfunctional,” said Bika Rebek, principal of the New York and Vienna–based firm Some Place Studio. In fact, a survey she conducted early on found that the various software suites museum professionals were using were major time sinks: fifty percent of respondents said they felt they were “wasting time” trying to fill in data or prepare presentations for design teams.

To Rebek, this is very much an architectural problem, or at least a problem architects can solve. Over the past two years, supported by NEW INC and the Knight Foundation, she has been developing Tools for Show, an interactive web-based application for designing and exploring exhibitions at various scales—from the level of a vitrine to a multi-floor museum. Leveraging her experience as an architect, 3D graphics expert, and exhibition designer (she’s worked on major shows for the Met and Met Breuer, including the OMA-led design for the 2016 Costume Institute exhibition Manus x Machina), Rebek set out to enable exhibition designers and curators to collaborate, and to empower new ways of engaging with cultural material for users anywhere. Currently, institutions use many different gallery tools, she explained, which don’t necessarily interact and don’t usually let curators think spatially in a straightforward way.
Tools for Show lets users import metadata from existing collection-management software (or enter it anew); the data is attached to artworks stored in a library that can then be dragged and dropped into a 3D environment at scale. Paintings and simple 3D shapes are automatically generated, though for more complex forms, where an image projected onto a form of a similar footprint isn’t enough, users can create their own models. For example, to produce the New Museum’s 2017 show Trigger: Gender as a Tool and a Weapon, Rebek rendered the space and included many of the basic furnishings unique to the museum. For other projects, like a test case with the Louvre's sculptures, she found free-to-use models and 3D scans online. Users can drag these objects across the 3D environments and access in-depth information about them with just a click. With quick visual results and Google Docs-style automatic updates for collaboration, Tools for Show could help replace not just more cumbersome content management systems, but endless emails too.

Rebek sees Tools for Show as having many potential uses. It can be used to produce shows, allowing curators to collaboratively and easily design and redesign their exhibitions, and, after a show comes down, it can serve as an archive. It can also be its own presentation system—not only allowing “visitors” from across the globe to see shows they might otherwise be unable to see, but also creating new interactive exhibitions or even just vitrines, something she’s been testing with Miami’s Vizcaya Museum and Gardens. More than just making work easier for curators and designers, Tools for Show could give a degree of curatorial power and play over to a broader audience. “[Tools for Show] could give all people the ability to curate their own show without any technical knowledge,” she explained. And, after all, you can't move around archival materials IRL, so why not on an iPad?
While some of the curator-focused features of Tools for Show are still in testing, institutions can already request display tools like those shown at Vizcaya. Rebek, a faculty member at Columbia University's Graduate School of Architecture, Planning, and Preservation, has also worked with students to use Tools for Show in conjunction with photogrammetry techniques, in an effort to develop new display methods for otherwise inaccessible parts of the Intrepid Sea, Air & Space Museum, a naval and aerospace history museum housed in a decommissioned aircraft carrier floating in the Hudson River. At a recent critique, museum curators were invited to see the students’ proposals and explore spatial visualizations of the museum through interactive 3D models, AR, and VR, as well as in-browser and mobile tools that included all sorts of additional media and information.
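The library-and-placement model behind Tools for Show, artwork records carrying imported metadata that are dropped into a 3D gallery at scale, can be sketched as a minimal data structure. This is an illustrative sketch only; all class and field names here are hypothetical, as Tools for Show's actual schema is not public.

```python
from dataclasses import dataclass, field

@dataclass
class Artwork:
    """Metadata as it might be imported from collection-management software."""
    title: str
    artist: str
    width_m: float   # physical dimensions, so the object appears at scale
    height_m: float

@dataclass
class Placement:
    """An artwork positioned somewhere in the 3D gallery model."""
    artwork: Artwork
    position: tuple          # (x, y, z) in meters within the gallery
    rotation_deg: float = 0.0

@dataclass
class Exhibition:
    name: str
    placements: list = field(default_factory=list)

    def place(self, artwork, position, rotation_deg=0.0):
        """Drop an artwork from the library into the gallery (the drag-and-drop step)."""
        self.placements.append(Placement(artwork, position, rotation_deg))

# Curating a one-work test hang:
show = Exhibition("Trigger")
show.place(Artwork("Untitled", "A. Artist", 1.2, 0.9), (2.0, 1.5, 0.0))
```

Because the exhibition is just data, the same record set could drive a production layout, a web "visit," or an archive of the show after it comes down.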

Open Call: R+D for the Built Environment Design Fellowship

R+D for the Built Environment is sponsoring a six-month, paid, off-site design fellowship program starting this summer. We're looking for four candidates working in the following key R+D topic areas:
  1. Building material science
  2. 3D printing, robotics, AR/VR
  3. AI, machine learning, analytics, building intelligence
  4. Quality housing at a lower cost
  5. Building resiliency and sustainability
  6. Workplace optimization
  7. Adaptable environments
We're excited to support up-and-coming designers, engineers, and researchers (and all the disciplines in between!) as they advance their work, and to provide them with a platform to share their ideas. Follow the link below for more details and instructions on how to apply. Applications are due by May 31, 2019. https://sites.google.com/view/rdbe-design-fellowship-2019/home

A French startup is using drones and AI to save the world's architectural heritage

Now active in over 30 countries, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at unprecedented resolutions and sharing its many-terabyte models with researchers and the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem’s work, technology, and plans for the future.

The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do.

Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archeological sites and I was concerned by that. I wanted to find a new way to record these sites and to preserve them even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, to create a digital copy or model of each site along with partner researchers in these different countries.

AN: You mentioned drones and AI; what technology are you using?

YU: We have a partnership with a lab in France, INRIA (Institut National de Recherche en Informatique/National Institute for Research in Computer Science and Automation). They developed an algorithm that can transform a 2D picture into a 3D point cloud, which is a projection of every pixel of the picture into space.
These points in turn reproduce the shape and the color of the environment, the buildings, and so on. It takes billions of points to reproduce the complexity of a place in a photorealistic manner, but the points are so tiny and so numerous that you cannot see the individual points; you see only the shape of the building in 3D.

Etienne Tellier: The generic term for the technology that converts these big datasets of pictures into 3D models is photogrammetry.

YU: Which is just one process. Even still, photogrammetry was invented more than 100 years ago… Before, it was a manual process and we were only able to reproduce just a part of a wall or something like that. Big data processing has made it possible to reproduce a huge part of the real environment. It’s a very new way of doing things. Just in the last two years, we’ve become able to make a copy of an entire city—like Mosul or Aleppo—something not possible before. We also have a platform to manage this huge amount of data and we’re working with cloud computing. In the future we want to open this platform to the public.

AN: All of this technology has already grown so quickly. What do you see coming next?

YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us because they're in a rebel zone, for example. Cameras also continue to get better and better. Today we can produce a model with one point per millimeter, and I think in the future we will be able to have ten points per millimeter. That will enable us to see every detail of something like small writing on a stone.

ET: Another possible evolution, which we are already beginning to see thanks to artificial intelligence, is automatic recognition of what is shown in a 3D model. That's something you can already have with 2D pictures.
There are algorithms that can analyze a 2D picture and say, "Oh okay, this is a cat. This is a car." Soon there will probably be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, "Okay, this is a Corinthian column. This dates back to the second century BC."

Another technology we are working on is for creating beautiful images from 3D models. We’ve had difficulties to overcome because our 3D models are huge. As Yves said before, they are composed of billions of points, and for the moment there is no 3D software on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, where we don't have to lower the quality of our 3D models. We can keep the native resolution and photorealism of our big 3D models and create very beautiful videos from them that can be as large as 32K and can be projected onto very big areas. There will be big developments in this field in the future.

AN: Speaking of projections, what are your approaches to making your research accessible? Once you've preserved a site, how does it become something that people can experience, whether they're specialists or the public?

YU: There are two ways to open this data to the public. The first is producing digital exhibitions that people can see, which we are doing today for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of the architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection-mapped a huge box, 600 square meters [6,458 square feet], with 3D video.
We were able to project monuments like the Damascus Mosque or the sites of Palmyra, and visitors were surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a scale of one to one, so the visitor feels like they’re inside the sites.

AN: So you could project one to one?

ET: Yes, we can project one to one. For example, in the exhibition we participated in recently at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. And often the visitor could see the sites at a one-to-one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. And they can discover not only the city as a whole but also the monuments and the architectural details. They can switch seamlessly between different scales—the macro scale of the city, the more micro one of the monument, and then the very micro one of a detail.

AN: What are you working on now?

ET: Recently, we participated in an exhibition financed by Microsoft and held at the Musée des Plans-Reliefs in Paris, a museum that holds replicas of the most important sites in France. They're 3D architectural replicas, or maquettes, that can be 3 meters [approx. 10 feet] wide; they were commissioned by Louis XIV and created during the 17th century because he wanted replicas with which to prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality, and they proposed making an experience in this museum focusing on the replica of Mont-Saint-Michel, the famous site in France.
We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both—the replica and the real site—and used them to create holograms that were superimposed on the physical model. Through headsets, visitors could see a hologram of water rising up and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, and the interplay between the two. And you could also see the site as it was hundreds of years before. It was a whole new experience relying on augmented reality, and we were really happy to take part in it. The exhibition should travel to Seattle soon.
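The core idea Ubelmann describes, projecting every pixel of a picture into space to form a point cloud, can be illustrated with a minimal back-projection sketch. This assumes a simple pinhole camera with known intrinsics and a per-pixel depth map; Iconem's actual pipeline recovers depth from thousands of overlapping photos and is far more involved, and all names below are illustrative.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Project every pixel of a depth image into a 3D point cloud.

    depth: (H, W) array of distances in meters along the camera axis.
    fx, fy, cx, cy: pinhole-camera intrinsics (focal lengths and
    principal point, in pixel units). Returns an (H*W, 3) array of
    (x, y, z) points in camera coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # inverse of the pinhole projection u = fx*x/z + cx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A toy 2x2 "image" where every pixel is 2 m from the camera:
cloud = backproject(np.full((2, 2), 2.0), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Each output row is one colored point in the reconstruction; with billions of such points, the individual dots vanish and only the shape of the building remains visible, as described above.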

Morpholio Trace adds augmented reality to its arsenal

This past year was a big one for Morpholio, whose app Trace was named an Apple “App of the Day” and a top app in education, design, and drawing. Now, with iOS 12’s updated augmented reality (AR) functionality, the company is pushing the iPad’s architectural sketching abilities further, and closer to (almost) real space. The app was always intended to blend the benefits of sketching and CAD, giving people the option to quickly work things out by hand while combining the benefits of paper and digital tools, but now Trace is hoping to disrupt another common practice: taping and staking off rooms.

“People have been pacing out plans as long as they've been drawing them. The palpable sense of scale, dimension, and extent simply can't be communicated with stills or even animation,” said Mark Collins, Morpholio cofounder. Now, leveraging Apple’s ARKit 2, Morpholio has developed a tool it calls ARSketchWalk, which allows Morpholio Trace users to project their drawings at scale onto what’s captured through an iPad’s camera, as well as to extrude walls and other shapes by touch. Collins went on to say: “The AR experience gives you a real sense of how your space will feel and lets you decide if it works for you.”

Morpholio cofounder Anna Kenoff said the company has seen people putting the tool to the test on clear sites and even on Manhattan office floors, but also using it to design smaller components like mullions or doorknobs, getting a sense of scale and adjusting it right at their desks or on site. “Scale is hard to perceive in a drawing if you’re not a pro, or even in a 3D model, which, our field has learned over the years, can be deceiving when it comes to real scale,” Kenoff explained, adding that AR offers an easier way to grasp relationships between spaces, with the added ability to see drawings come to life—a quick way to test and iterate ideas that combines the benefits of traditional sketches with the precision of accurate scale and perspective.
The beta release has also been popular with landscape architects, Kenoff reported. The app takes advantage of another new Apple feature as well: the ability to experience AR environments simultaneously, and from different vantages, across multiple devices. “Design is one part idea and two parts convincing others that your idea is the right one,” said Kenoff. The idea, then, is to let other designers, project partners, and clients share the experience of moving through speculative spaces (secondary users can join from their phones, with no need for an iPad Pro). Morpholio has also expanded the app’s responsiveness to new Apple Pencil gestures, and Kenoff said the new pencil design has been a boon, allowing Morpholio engineers to build software for designers to draw things “only a hand” can.

While apps like Morpholio Trace still can’t entirely do away with powerful desktop applications, the increasing computing power of mobile devices, paired with the ability to use them on site, is opening new avenues for the design process and for communicating with clients. “We’re just beginning to harness that technology and see how designers might use it; these are the early phases of that exploration,” said Kenoff. She went on: “We see Trace as just one piece of a much larger infrastructure; designers need different tools at different phases of design, but they need different levels of precision at every phase, something Trace can offer.”
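The extrusion gesture described above, popping a sketched floor plan up into full-scale walls, can be illustrated with a minimal geometry sketch. Function and parameter names here are illustrative only; Morpholio's actual implementation is not public.

```python
def extrude_walls(footprint, height):
    """Turn a closed 2D footprint (list of (x, y) vertices, in meters)
    into 3D wall quads, the way a sketched plan might be extruded to
    full scale in AR. Each wall is four (x, y, z) corners: the bottom
    edge of one plan segment, then the same edge lifted to `height`."""
    walls = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return walls

# A 4 m x 3 m room extruded to 2.7 m ceiling height:
room = extrude_walls([(0, 0), (4, 0), (4, 3), (0, 3)], height=2.7)
```

In an AR session, quads like these would be rendered over the camera feed at 1:1 scale, which is what gives the walk-through its sense of real dimension.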

Serpentine Gallery and David Adjaye put out a call for trippy AR architecture

London’s Serpentine Galleries are going high-tech for their 2019 summer installation. Together with Google Arts & Culture and Serpentine trustee David Adjaye, the arts institution is soliciting augmented reality (AR) architecture proposals to run alongside the 2019 Serpentine Pavilion this summer. Serpentine Augmented Architecture is taking entries from all over the world until February 25. According to the brief, applicants are expected to “propose imaginary city spaces and speculations on the built environment to be developed and experienced” in AR onsite at the galleries.

The jury, composed of writers, curators, designers, technologists, and architects, is evaluating proposals against three criteria: How can the city be augmented or reinvented through AR? How can AR recontextualize our spatial relationships, given that the challenges and limitations of designing in the physical world are nonexistent in digital realities? And finally, each submission needs to be at least somewhat site-specific, as the Serpentine Gallery sits within an ecologically sensitive park and welcomes up to 12 million visitors a year. Most importantly, because AR is at a relatively nascent stage, despite the tech being readily available to anyone with a smartphone, the boundaries and rules for its use have yet to be written; any creative uses of augmented reality, including those that evolve over time, will be accepted.

Entrants selected for the two-stage competition’s shortlist will move ahead to the second round and receive a stipend of $1,000 to further develop their ideas. After that, one winning proposal will be realized on the gallery’s grounds in July, and the winning team will receive approximately $3,800 for travel and accommodation expenses. Interested in applying? The full guidelines can be found here.
The Serpentine Galleries are leaning heavily into tech this year, and Marina Abramovic will be staging her own AR installation in the main gallery space from February 19 through 24. While the architect of the 2019 Serpentine Pavilion hasn’t been announced yet, that information should become available any day now, giving applicants a better idea of what their work will be shown alongside.

Augmented environments come alive in a riotous exhibition in Stockholm

The streets of Stockholm are still littered with the fading colors of campaign posters from an undecided election: the pink backdrop of feminist candidates, progressive reds, conservative blues, the struggling greens, and the slippery, slick gradients of the neo-nationalists. The politics of the public sphere have been unavoidable lately in Sweden, but a couple of heavy rains are slowly turning what’s left behind back into grey pulp. Meanwhile, Value in the Virtual, an exhibit by London-based Space Popular, blossoms with a polychrome array of signs, patterns, and symbols. The show has just opened at Sweden’s national center for architecture and design, ArkDes, under its new director Kieran Long. A new feature of the museum is a smaller gallery space—Boxen, a steel box designed by local practice Dehlin Brattgård—inside one of the two main exhibition halls. It is intended for fast-paced architecture shows curated by former ArchDaily editor James Taylor-Foster, and Value in the Virtual is the first display of work by a contemporary design practice in the new setup. The Space Popular duo, Lara Lesmes and Fredrik Hellberg, have grabbed the opportunity to build what they describe as their manifesto. It consists of six "typical" Stockholm environments—a downtown apartment, a vintage-styled coworking café, an exclusive nightlife district, a banquet hall, an iconic subway station, and a royal park—retrofitted with an added layer of augmented reality. They are rendered as 1:1-scale elevations printed on tapestries and hung from scaffolding in cordoned-off spaces. Each corner is a mash-up of "walls" from two or three locations. Entering the exhibition space, you are invited to take your shoes off (to experience the printed carpets on the floors) and, once inside, to put on a pair of virtual reality goggles. They are a window into Voxen, a parallel version of the same gallery space produced for Sansar, a social virtual reality platform. 
During the press preview, an online visitor had already found his way there for a peek. The avatar, dressed in a black bodysuit and a Daft Punk motorcycle helmet, showed up out of nowhere, mumbled a distorted "nice to meet you," and soon disappeared. In this realm, you get 3-D views of some of Space Popular’s scenarios: one is a version of Stockholm where public art is updated by the minute; in another, the allegorical wall mosaics of the Nobel Prize venue are adjusted to tout recent scientific achievements; and in another, a selective nightclub bouncer might actually let you in after all, if you upgrade to a nicer-looking "skin." For a casual visitor to follow the narratives, though, some additional guidance would be welcome. Instead, the dense 3-D visuals are trusted to speak for themselves. An argument underlying Value in the Virtual is that architects could, and should, engage with the potential digital depth of architectural surfaces, as well as what comes with it: cognitive psychology, color theory, and the techniques to manipulate both. The exhibit seems to say, "Get on with it and design your own digital tapestry for the scaffolding that is, for lack of a better word, the real world. And if you don’t," it continues, "somebody else will." And when augmented reality hardware moves from "a drawer on your face" to a casual pair of glasses, the virtual dressing of space, along with the market for virtual real estate, could become a big deal. “What is the role of the architect and designer in this transition?” a section of the wall text asks, when designers are “not bound by a requirement to shelter" nor “to the various physical limitations” we are used to? Either way, many architects are already engaged: architecture school graduates, especially in the United States, are increasingly recruited by software companies before they even contemplate careers as licensed master-builders. 
Architecture is already part of the $137.9 billion gaming market, but perhaps not in familiar ways. The exhibition wants to look at what is going on while augmented reality is still in its infancy and to figure out the consequences of the tech before it is sitting on everyone’s face. Will visitors get it? Do the chosen scenarios illustrate what is actually claimed to be at stake? What is the real promise here? What is the threat? These questions are touched upon in the exhibition brochure but are not very evident in the exhibit itself. And if the argument is made more clearly in writing in Space Popular’s ten-point programmatic declaration, what does the gallery display add, except a VR demo and an institutional seal of approval? Maybe that’s enough. It’s a start to the conversation Space Popular wants us to have. Stockholm is normally very neat. Putting up posters is prohibited, and the rules are generally abided by. Political campaign material is the exception, and come election season, the city is carpeted with colorful platitudes and slogans. Public space is already a kind of virtual real estate, a moderated social medium that is regulated, traded, and hacked. It always has been, you could argue. Sure, the visual exploitation of it is augmented by new technologies, but the question, then, is whether that augmentation does anything that didn't happen before. The politics of augmentation are not that different from the conflicts of interest, power plays, and negotiations of spatial politics already being debated. A peephole into another world can make you forget where you are standing at the moment. Sometimes, a mirror would perhaps be more useful. Space Popular: Value in the Virtual, curated by James Taylor-Foster, is on view at ArkDes, Sweden's national center for architecture and design in Stockholm, through November 18, 2018.

Apple's latest announcement makes augmented reality easier for architects

Apple has wrapped up its keynote at the 2018 Worldwide Developers Conference (WWDC) and announced several big changes coming with iOS 12 that should shake things up for architects and designers. The biggest announcements focused on virtual and augmented reality (AR), as Apple unveiled ARKit 2.0. With a presentation backed by a constellation of big names such as Lego, Adobe, Autodesk, and more, the sequel to Apple’s original AR framework promised to bring a much more cohesive AR ecosystem to consumers. For architects and interior designers, Apple’s promise of easily digitizing and later placing real-world objects in an AR environment could have a huge impact on envisioning what a space will look like. If ARKit 2.0 succeeds, it could be used to decorate a room or to put different architectural options (literally) on the table without having to build physical iterative models. A collaborative, locally hosted AR experience was also shown off at the keynote, with two players using their iPads to knock down wooden towers Angry Birds-style. One complaint about the original ARKit was its fragmented ecosystem and the inability to interact with AR objects across apps. For iOS 12, Apple has partnered with Pixar to develop a new file format for 3D models, USDZ (Universal Scene Description Zip). USDZ files should be openable across iOS devices and Macs, and Adobe is promising native Creative Cloud support. The introduction of a streamlined system for sharing and examining 3D objects in the real world, and for creating persistent AR experiences in specific locations, likely means enhanced functionality for apps like Morpholio once the new iOS rolls out. For those looking for more practical applications of the technology, Apple is also expanding its list of native AR apps. 
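As the name suggests, a USDZ package is a zip-based archive bundling a Universal Scene Description file with its textures, which is part of what makes the format easy to share between apps and devices. As a minimal sketch (the file name and helper function here are hypothetical, not part of any Apple tooling), the contents of such a package can be listed with Python's standard library alone:

```python
import zipfile

def list_usdz_contents(path):
    """Return the names of the assets packed inside a USDZ archive.

    USDZ is a zip-based container, so the standard-library zipfile
    module can read its table of contents without any 3D tooling.
    """
    with zipfile.ZipFile(path) as archive:
        return archive.namelist()
```

A typical package would show a scene file (e.g., `chair.usdc`) alongside its texture images; inspecting one this way is just a quick sanity check, not a substitute for opening the model in an AR viewer.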
Developers with measuring tools on the App Store likely won’t be happy with Measure, Apple’s in-house AR solution that allows users to point their camera at reference objects and then tap to measure anything else. Apple's senior vice president of Software Engineering, Craig Federighi, took to the stage and used Measure to seamlessly calculate the dimensions of a trunk, then of his own baby photo. iOS 12 will get a public release when the latest iPhone model rolls out in September of this year.