Posts tagged with "Artificial Intelligence":


ACADIA 2019 showcased the state of digital design

The presentations and activities at this year’s ACADIA (Association for Computer Aided Design in Architecture) conference gave attendees a glimpse of potentially disruptive technologies and workflows for computational architectural production. The conference was held in Austin from October 12 through 14 and was organized by The University of Texas School of Architecture faculty members Kory Bieg, Danelle Briscoe, and Clay Odom. The organizers collected papers, workshops, and projects addressing the theme of “Ubiquity and Autonomy” in computation. Contributors reflected on the state of architectural production, in which digital tools and methodologies developed in the boutique, specialized settings at the fringes of the profession a generation ago have now become commonplace in architectural offices—while at the same time, new forms of specialist computational practice are emerging which may themselves soon become mainstream.

While each participant grappled to position themselves in this cyclical and ever-advancing framework of technological inheritance and transference, the most encouraging efforts can be described in three categories: expansions, subversions, and wholesale disruptions of the computational status quo. The expansionists claimed new technological territories, enlisting emerging and peripheral technologies to their purposes. The subvertors sampled the work and scrambled the workflows of their predecessors, configuring novel material applications in the process. The disruptors actively sought to break the techno-positivist cycle, questioning the assumptions, ethics, and values of previous generations and leveraging computational design and digital processes to advance pressing political, economic, and ecological agendas.

Expansionists appropriated bleeding-edge technologies, or those newly introduced to the discipline, to stake new terrain in design and construction. 
The conference was the first of its kind to host a dedicated session on the use of Generative Adversarial Networks (GANs) in design. This machine-learning approach pits two networks against each other—one, the generator, acts as the creative “artist,” proposing candidate solutions to a given task, while the other, the discriminator, acts as the “critic,” judging whether each candidate could pass for an example from the training data. After training the networks on archives of architectural imagery, panelists put GANs to work on evaluative and generative design tasks, producing passably authentic floor plans, building envelopes, and reconstructed streetscapes. The workshop sessions, hosted by a suite of computational research teams from several architectural offices, demonstrated possibilities for pairing emerging technologies with familiar platforms, adapting tools like Fologram and HoloLens to established software and fabrication methods.

The subvertors, familiar with the expected uses and applications of given tools, offered intentionally contradictory alternatives, short-circuiting established workflows and celebrating the unintended consequences of digitally enhanced platforms. A project from MIT researchers Lavender Tessmer, Yijiang Huang, and Caitlin Mueller entitled “Additive Casting of Mass-Customizable Brick” is a good example of the subvertors’ approach to interrogating workflows, enlisting precision equipment for low-fidelity effect. Whereas the current state of the art in custom concrete formwork employs costly, time-consuming workflows that task CNC routers or robotic arms with milling molds, the MIT project offers a critical alternative. Instead of shaping the mold, the project mobilizes the mold, achieving a wide variety of sculptural concrete “bricks” using standard cylindrical forms wielded by a robotic arm and leveraging the ability of liquid concrete to self-level. 
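The adversarial dynamic can be sketched in miniature. The toy below is an illustration, not any panelist's actual model: a "design" is reduced to a single number, real examples cluster near 4.0, and a linear generator ("artist") is trained against a logistic discriminator ("critic") so that generated samples drift toward the real distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # Stand-in for an archive of real designs: a scalar metric near 4.0.
    return rng.normal(4.0, 0.5, size=n)

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * z + g_b
d_a, d_c = 0.1, 0.0   # discriminator: p(real) = sigmoid(d_a * x + d_c)
lr = 0.01

for step in range(2000):
    # Critic step: increase log d(real) + log(1 - d(fake)).
    real = real_samples(32)
    fake = g_w * rng.normal(size=32) + g_b
    p_real, p_fake = sigmoid(d_a * real + d_c), sigmoid(d_a * fake + d_c)
    d_a += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Artist step: increase log d(fake), i.e. make fakes look real to the critic.
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    grad_fake = (1 - sigmoid(d_a * fake + d_c)) * d_a
    g_w += lr * np.mean(grad_fake * z)
    g_b += lr * np.mean(grad_fake)

# After training, generated samples cluster near the real data's mean.
generated = g_w * rng.normal(size=1000) + g_b
```

Real architectural GANs replace these scalars with deep convolutional networks trained on image archives, but the artist-versus-critic training loop is the same.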
The molds are shifted to preset positions while the concrete sets, allowing the sequential states of self-leveled concrete to intersect in complex geometries. The process is surprisingly delightful to watch, as the robot controls seven molds simultaneously like a drummer at a drum kit. The unexpected combination of high- and low-tech recalibrates the possibilities of robotic craft.

Other researchers swapped out expected materials to produce unexpected results. Vasily Sitnikov (KTH) and Peter Eigenraam (TU Delft) teamed with BuroHappold to produce IceFormwork, a project that uses milled blocks of ice as the unlikely forms for casting high-performance fiber-reinforced concrete. Ice, the team argued, is an environmentally neutral alternative to industry-standard EPS foam molds, which produce a vast amount of waste. Ice molds, the team demonstrated, are easy enough to make (with some help from a reliable water source and a repurposed refrigerated ISO container). Airborne particles suspended by the ice-milling process are harmless water vapor, unlike the dangerous foam dust that requires ventilation equipment and other protective measures. And when it comes to demolding, the ice can simply be left outside to melt. While these investigations showcased new ways to hack the assembly of cast building elements, their choice of material contradicted a growing consensus in the panels: that designers should actively seek alternatives to the glut of concrete in the building industry, given the high ecological cost and carbon footprint of concrete manufacturing in the context of an accelerating global sand shortage. 
Daniela Mitterberger and Tiziano Derme (MAEID/University of Innsbruck) offered one of the more radical alternatives with their project “Soil 3D Printing.” The team is using hydrogels—non-toxic, biodegradable adhesives—as binding agents injected into loose soil, forming alien landscapes of networked, earthen structures that portend a near future in which biocompatible, organic additive manufacturing processes restructure geotechnical landscapes and planetary geology.

The provocations of the disruptors—who radically repurpose computational tools beyond perceived disciplinary constraints—raised profound questions about the potential for design technologies to enable larger societal transformations, taking aim squarely at global supply chains, material economies, and non-human constituencies. Jose Sanchez (Plethora Project/Bloom Games/USC), in the presentation he gave while accepting the Innovative Research Award, presented his work leveraging computation and game design to critically examine and transform economic and ecological realities. Sanchez has developed a series of game environments that force players to navigate wicked problems in contemporary cities and confront the complexities, contradictions, and paradoxes of urbanization, logistics, and manufacturing. Sanchez described his continued focus on efforts to "optimize for the many"—as opposed to the few—in a period of increased economic inequality, reassessing the predominant use of digital technologies over the past few decades to enable complex mass-customized assemblies. In his own work, and in projects like Bloom with Alisa Andrasek (Biothing/Bloom Games/RMIT), Sanchez has been exploring the potential of digital technologies to disrupt mass-production models through high-volume production of serialized and standardized “discrete” architectural components. 
In a similar vein, Gilles Retsin (UCL/Bartlett) argued for a reconsideration of the labor practices and digital economies enmeshed in, and implicitly supported by, a building industry that has not yet come to terms with automation. By focusing on the ability of digital tools to combat material waste, Retsin argued, a generation of digitally savvy architects has ignored the potential of automation to address wasted labor. Through speculative research and small projects, Retsin hopes to disrupt the building industry, increasing the capacity of architects to design and implement new platforms for project delivery that can combat exploitative practices.

As expansionists pointed out where to look for the next big advancement, subvertors demonstrated how existing tools could be used differently. Disruptors were some of the few to ask—and answer—why.

Stephen Mueller is a founding partner of AGENCY and a Research Assistant Professor at the Texas Tech University College of Architecture in El Paso.

Roaming robot dogs could streamline jobsite documentation

Reality capture has revolutionized construction by increasing job site efficiency and safety and allowing for quick responses to design and building challenges. However, save for the use of drones, often operated by humans, on-the-ground monitoring has required the relatively traditional (and labor-intensive) task of walking around, taking photos, and collecting data to feed into software. HoloBuilder, whose software helps builders document and analyze their underway projects, has partnered with the robotics firm Boston Dynamics on a semi-autonomous solution for documenting under-construction projects. Using Boston Dynamics’ Spot, a dog-like robot that regularly goes semi-viral for its aerial acrobatics (and its more sinister uses, such as being put to work by the Massachusetts State Police), contractors can capture 360-degree overviews of their work and track changes throughout the build process. Controlled by the SpotWalk app, the robot is first semi-manually trained to walk its reality-capture route via a user’s phone. Spot then learns to repeat the route on its own, avoiding obstacles and documenting the site consistently and regularly, building a record of the project over time.

Contractor Hensel Phelps has been testing Spot on the $1.2 billion San Francisco International Airport Terminal 1 project. A Spot unit walks through the site capturing imagery, which is then fed into HoloBuilder’s machine-learning-powered SiteAI, which provides automated construction tracking and other data. Documenting construction sites is currently a tedious task that takes time project staff could otherwise spend on other aspects of construction, safety, and design—and its demands mean it can be done only with relatively limited regularity. With Spot, project managers predict that they could capture updates of their sites as frequently as twice a day, with all the 360-degree imagery automatically organized and analyzed. 
Because Spot captures imagery more consistently than human operators, the photos are more useful as tools, and the regularity of collection makes the data more actionable.
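The record-then-replay workflow can be sketched as follows. The class and function names here are hypothetical illustrations of the pattern, not HoloBuilder's or Boston Dynamics' actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureRoute:
    """A documentation route: waypoints recorded once, then re-walked on a schedule."""
    name: str
    waypoints: list = field(default_factory=list)  # (x, y) positions on the site plan

    def record(self, position):
        # Log a waypoint while an operator walks the robot via the app.
        self.waypoints.append(position)

    def replay(self, capture):
        # Re-walk the learned route, taking a 360-degree capture at each waypoint.
        return [capture(p) for p in self.waypoints]

# Training walk: the operator guides the robot through four positions.
route = CaptureRoute("terminal-1-level-2")
for pos in [(0, 0), (5, 0), (5, 8), (12, 8)]:
    route.record(pos)

# Stand-in for the camera: tagging each image with its waypoint lets captures
# from different days be compared at identical positions.
shots = route.replay(lambda p: {"waypoint": p, "image": f"pano_{p[0]}_{p[1]}.jpg"})
```

Repeating the identical waypoints on every pass is what makes day-over-day comparisons (and downstream change detection) reliable.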

Jenny Sabin's installation for Microsoft responds to occupants' emotions

At Microsoft’s Redmond, Washington, campus, architect Jenny Sabin has helped realize a large-scale installation powered by artificial intelligence. Suspended from three points within an atrium, the two-story, 1,800-pound sculpture is a compressive mesh of 895 3D-printed nodes connected by fiberglass rods and arranged in hexagons, along with fabric knit from photoluminescent yarn. Created as part of Microsoft’s artist-in-residence program, the project is named Ada, after Ada Lovelace, the English mathematician whose work on the analytical engine laid the groundwork for the invention of computer programming as we know it. Anonymized information is collected from microphones and cameras throughout the building. An AI platform designed by a team led by researcher Daniel McDuff processes this data to try to sense people’s emotions based on visual and sonic cues, like facial movements and tone of voice. The data is then synthesized and run through algorithms that create a shifting color gradient, which Ada produces with an array of LEDs, fiber optics, and par (can) lights. “To my knowledge, this installation is the first architectural structure to be driven by artificial intelligence in real-time,” Sabin, Microsoft’s current artist in residence, told the company’s AI Blog. Microsoft touts Ada as an example of “embedded intelligence,” AI that’s built in and responsive to our real-world environment. McDuff also hopes that his emotion-tracking technology, as dystopian as it might sound, could have applications in healthcare and other caregiving situations. (Microsoft employees are able to opt out of individuated tracking, and the company assures that all identifying information is removed from the media collected.)
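A pipeline of this kind might map aggregate emotion scores to a lighting gradient along these lines. This is a minimal sketch under stated assumptions—the per-emotion anchor hues and the blending rule are invented for illustration; Microsoft has not published Ada's actual mapping.

```python
import colorsys

# Hypothetical anchor hues per sensed emotion, in degrees on the color wheel.
EMOTION_HUES = {"joy": 50, "calm": 180, "surprise": 300}

def gradient_from_emotions(scores, steps=5):
    """Blend weighted emotion scores into one hue, fanned into a small RGB gradient."""
    total = sum(scores.values())
    hue = sum(EMOTION_HUES[e] * w for e, w in scores.items()) / total
    colors = []
    for i in range(steps):
        # Spread a few degrees around the dominant hue for a shifting wash of color.
        h = ((hue + (i - steps // 2) * 7.5) % 360) / 360.0
        r, g, b = colorsys.hsv_to_rgb(h, 0.6, 1.0)
        colors.append(tuple(round(c * 255) for c in (r, g, b)))
    return colors

# Aggregate (anonymized) scores in, one RGB triple per LED zone out.
leds = gradient_from_emotions({"joy": 0.7, "calm": 0.2, "surprise": 0.1})
```

The real installation would re-run such a mapping continuously as new sensor data arrives, which is what produces the slowly shifting gradient visitors see.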
Ada is part of a broader push to embed sensing and artificial intelligence into the built environment by Microsoft and many other companies, as well as artistic pavilions that grapple with the future of AI in our built world, like Refik Anadol's recent project at New York's ARTECHOUSE.

Aesthetics of Prosthetics compares computer-enhanced design practices

How have contemporary architecture and culture been shaped by our access to digital tools, technologies, and computational devices? This was the central question of Aesthetics of Prosthetics, the Pratt Institute Department of Architecture’s first alumni-run exhibition, curated by recent alumni and current students Ceren Arslan, Alican Taylan, Can Imamoglu, and Irmak Ciftci. The exhibition, which closed last week, took place at Siegel Gallery in Brooklyn. The curatorial team staged an open call for submissions that addressed the ubiquity of “prosthetic intelligence” in how we interact with and design the built environment. “We define prosthetic intelligence as any device or tool that enhances our mental environment as opposed to our physical environment," read the curatorial statement. "Here is the simplest everyday example: When at a restaurant with friends, you reach out to your smartphone to do an online search for a reference to further the conversation, you use prosthetic intelligence." As none of the works shown have actually been built, the pieces experimented with the possibilities for representation and fabrication that “prosthetic intelligence” allows. The selected submissions used a range of technologies and methods, including photography, digital collage, AI, digital modeling, and virtual reality.

The abundant access to data and its role in shaping architecture and aesthetics was a pervasive theme among the show's participants. Ceren Arslan's Los Angeles, for instance, used photo collage and editing to compile internet-sourced images into an imaginary yet believable streetscape. Others speculated about data visualization at a moment when drawings are increasingly expected to be read not only by humans but by machines and AI, as in Brandon Wetzel's deep data drawing.

"The work shown at the exhibition, rather than serving as a speculative criticism pointing out towards a techno-fetishist paradigm, tries to act as recording device to capture a moment in architectural discourse. Both the excitement and skepticism around the presented methodologies are due to the fact that they are yet to come to fruition as built projects," said the curators in a statement. 


URBAN-X 6 showcases new tech solutions at A/D/O

This past Thursday, URBAN-X hosted its sixth demo day at A/D/O in Brooklyn, where startups showed what Micah Kotch, the startup accelerator's managing director, called “novel solutions to urban life.” URBAN-X, which is organized by MINI, A/D/O’s founder, in partnership with the venture firm Urban Us, began incubating urban-focused startups in 2016. Previous iterations have seen everything from electric vehicle companies to waste management startups, and for this session, the brief was intentionally broad, said Kotch. On display was everything from machine-learning solutions for building energy management to apps that let people buy leftover prepared food from fast-casual restaurants and cafes to prevent food waste and generate some extra revenue.

Pi-Lit showed off a networked approach to highway and infrastructure safety. Many lives are lost each year as people sit in stopped vehicles after accidents, or as construction workers operate in dangerous work zones. The California-based company has developed mesh-networked lighting that can be deployed by first responders or work with existing work-zone infrastructure. In addition, the company has developed an array of sensors that can be affixed to bridges, roads, and temporary barriers—which founder Jim Selevan says are prone to impacts that transportation departments never learn about, leading to unknown compromises that can cause accidents later on. Sensors could also let relevant parties know if a bridge is vibrating too much, or when roads begin freezing and warnings need to be put out, providing users with “real-time ground truth.” 3AM also presented plans for using mesh networks, with a focus on safety; its program relies on drones and portable trackers to support operational awareness for firefighters. 
More whimsically, Hubbster showcased its solution—already deployed in Paris and Copenhagen—to support urban play: essentially an app-based rental system for basketballs, croquet sets, and everything in between, dispensed from small, battery-powered smart lockboxes. Less glamorously but quite critically, Varuna is trying to change the aging U.S. water infrastructure system, which exposes as much as 63 percent of the country to unsafe water and largely relies on manual testing, even for federally mandated across-the-board chlorine monitoring. The company hopes that by introducing AI-equipped sensors to utility systems, U.S. water can be delivered more safely, efficiently, and cheaply, addressing "operational inefficiencies in water supply, outdated tools, and a lack of visibility.” Also working with utilities was the Houston-based Evolve Energy, whose AI behavioral classification solution, currently available in parts of Texas, allows electricity to be bought at wholesale prices at the times of day when it is cheapest, prioritizing the comfort and needs individual users value most. For example, a home can pre-cool with cheap electricity and then turn off the air conditioning when prices surge. Variable rates, a la airline tickets, were a common theme—for example, Food for All, an app designed to reduce food waste and create extra revenue for fast-casual restaurants, offers flexible pricing for customers to pick up food that might otherwise be tossed. Most relevant to architects, perhaps, were Cove.Tool’s recent updates. The startup reports big strides on its cloud-based app that helps architects create efficient buildings. Reportedly cutting energy grading from tens of hours to mere minutes, the app can now simulate the effects of sunlight—through various types of glass—on utility usage, among many other new microclimatic simulation features.
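The pre-cooling idea reduces to a simple scheduling problem: run the air conditioner hard during the cheapest hours and coast through the price surges. The sketch below illustrates that principle only—it is not Evolve Energy's algorithm, and the prices are invented.

```python
def precool_schedule(prices, hours_needed):
    """Pick the cheapest `hours_needed` hours of the day to run cooling."""
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])[:hours_needed]
    return sorted(cheapest)

# Hypothetical wholesale prices ($/kWh) for six hours of an afternoon;
# the spike at hours 3-4 is the period the home coasts through.
prices = [0.04, 0.03, 0.11, 0.22, 0.19, 0.06]
schedule = precool_schedule(prices, 3)  # → [0, 1, 5]
```

A production system would add the thermal model of the house (how long it stays cool once the AC shuts off) as a constraint, which is where the "AI behavioral classification" comes in.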

ARTECHOUSE's Chelsea Market space will let visitors experience architectural hallucinations

ARTECHOUSE, a technology-focused art exhibition platform conceived in 2015 by Sandro Kereselidze and Tati Pastukhova, has been presenting digitally inspired art in Washington, D.C., and Miami. Now it’s coming to New York, “a clear next step for [its] mission,” with an inaugural exhibition by Refik Anadol. The Istanbul-born, Los Angeles-based Anadol is known for light and projection installations that often have an architectural component, such as the recent animation projected on the facade of the Frank Gehry-designed Walt Disney Concert Hall. For ARTECHOUSE in New York (also Anadol’s first large exhibition in the city), he’ll be presenting Machine Hallucination, an installation that creates what he calls “architectural hallucinations” derived from millions of images processed by artificial intelligence and machine-learning algorithms. “With Refik, it’s been a collaborative process for over a year and a half, bringing a new commission, Machine Hallucination, to life,” explained Kereselidze and Pastukhova. “We have worked closely with Refik to develop the concept for this exciting new work, thinking carefully about how to most effectively utilize and explore our Chelsea Market space.” ARTECHOUSE is especially suited to visualizing Refik’s “data universe,” with a floor-to-ceiling, room-wrapping 16K laser projector that the creators claim features “the largest seamless megapixel count in the world,” along with 32-channel sound from L-ISA. The more than 3 million photos, representing numerous architectural styles and movements, will be made to expose (or generate) latent connections between these representations of architectural history, generating “hallucinations” that challenge our notions of space and how we experience it—and providing insight into how machines might experience space themselves. It makes us consider what happens when architecture becomes information. 
Of the work, Anadol said, “By employing machine intelligence to help narrate the hybrid relationship between architecture and our perception of time and space, Machine Hallucination offers the audience a glimpse into the future of architecture itself.” Machine Hallucination will inhabit the new 6,000-square-foot ARTECHOUSE space in Chelsea Market, located in an over-century-old former boiler room with exposed brick walls and a refurbished terracotta ceiling—a space that, according to its creators, “supplies each artist with a unique canvas and the ability to drive narratives connecting the old and new.” ARTECHOUSE will open to the public early next month.

How can new technologies make construction safer?

Construction remains one of the most dangerous careers in the United States. To stop accidents before they happen, construction companies are turning to emerging technologies—virtual reality, drone photography, IoT-connected tools, and machine learning—to improve workplace safety. That said, some solutions come with the looming specter of workplace surveillance in the name of safety, with all of the Black Mirror-esque possibilities that entails.

The Boston-based construction company Suffolk has turned to artificial intelligence to try to make construction safer. Suffolk has been collaborating with the computer vision company Smartvid.io to create a digital watchdog of sorts that uses a deep-learning algorithm and workplace images to flag dangerous situations and workers engaging in hazardous behavior, like failing to wear safety equipment or working too close to machinery. Suffolk has even managed to get some of its smaller competitors to join it in data sharing, a mutually beneficial arrangement since machine-learning systems require large amounts of example data—something that's harder for smaller operations to gather. Suffolk hopes to use this decade’s worth of aggregated information, as well as scheduling data, reports, and readings from IoT sensors, to create predictive algorithms that will help prevent injuries and accidents before they happen and increase productivity. Newer startups are also entering the AEC AI fray, including three supported by URBAN-X: the bicoastal Versatile Natures bills itself as the "world's first onsite data-provider," aiming to transform construction sites with sensors that allow managers to make decisions proactively; Buildstream is embedding sensors in equipment and construction machinery to make them communicative; and Contextere, focusing on people instead, claims that its use of the IoT will connect different members of the workforce. 
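A watchdog of this kind typically layers simple rules over the detections a vision model emits. The sketch below illustrates that rule layer only; the labels, fields, and clearance threshold are assumptions for illustration, not Smartvid.io's API, and the detections themselves would come from the trained model.

```python
import math

def flag_hazards(detections, min_clearance=3.0):
    """Flag people without hard hats or standing too close to machinery."""
    flags = []
    people = [d for d in detections if d["label"] == "person"]
    machines = [d for d in detections if d["label"] == "machine"]
    for p in people:
        if not p.get("hardhat", False):
            flags.append(("missing_hardhat", p["id"]))
        for m in machines:
            # Positions are in meters on the site plan; flag close approaches.
            if math.dist(p["pos"], m["pos"]) < min_clearance:
                flags.append(("too_close_to_machine", p["id"]))
    return flags

# Hypothetical output of a detector run on one site photo.
detections = [
    {"id": "w1", "label": "person", "hardhat": True,  "pos": (0.0, 0.0)},
    {"id": "w2", "label": "person", "hardhat": False, "pos": (9.0, 1.0)},
    {"id": "m1", "label": "machine", "pos": (1.5, 1.0)},
]
alerts = flag_hazards(detections)
# w1 is about 1.8 m from m1 (clearance flag); w2 has no hard hat.
```

In practice the hard part is the detector itself, which is why the shared training data Suffolk is aggregating matters so much.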
At the Florida-based firm Haskell, instead of just using surveillance on the job site, they’re addressing the problem before construction workers even get into the field. While videos and quizzes are one way to train employees, Haskell saw the potential for interactive technologies—specifically virtual reality—to boost employee training in a safe context. In the search for VR systems that might suit its needs, Haskell discovered that no existing solutions were well suited to the particulars of construction. Along with its venture capital spinoff, Dysruptek, the firm partnered with software engineering and game design students at Kennesaw State University in Georgia to develop the Hazard Elimination/Risk Oversight program, or HERO, built on software like Revit and Unity. The video game-like program places users in a job site, derived from images taken by drone and 360-degree cameras at a Florida wastewater treatment plant that Haskell built, and evaluates a trainee’s performance and ability to follow safety protocols in an ever-changing environment.

At Skanska USA, where 360-degree photography, laser scanning, drones, and even virtual reality are becoming increasingly commonplace, employees are realizing the potential of these new technologies not just for improved efficiency and accuracy in design and construction, but for overall job site safety. Albert Zulps, Skanska’s regional director of virtual design and construction, says that the tech goes beyond BIM and design uses and actively helps avoid accidents. “Having models and being able to plan virtually and communicate is really important,” Zulps explained, noting that BIM models are now all but universally trusted in the AEC industries, and that the increased accuracy of capture technologies is making them more accurate still—tying them not just to predictions but to the realities of the site. “For safety, you can use those models to really clearly plan your daily tasks. 
You build virtually before you actually build, and then foresee some of the things you might not have if you didn't have that luxury.” Like Suffolk, Skanska has partnered with Smartvid.io to help them process data. As technology continues to evolve, the ever-growing construction industry will hopefully be not just more cost-efficient, but safer overall.

Open Call: R+D for the Built Environment Design Fellowship

R+D for the Built Environment is sponsoring a six-month, paid, off-site design fellowship program starting this summer. We're looking for four candidates working in key R+D topic areas:
  1. Building material science
  2. 3D printing, robotics, AR/VR
  3. AI, machine learning, analytics, building intelligence
  4. Quality housing at a lower cost
  5. Building resiliency and sustainability
  6. Workplace optimization
  7. Adaptable environments
We're excited to support up-and-coming designers, engineers, and researchers (and all the disciplines in between!) as they advance their work, and to provide them with a platform to share their ideas. Follow the link below for more details and instructions on how to apply. Applications are due by May 31, 2019. https://sites.google.com/view/rdbe-design-fellowship-2019/home

A French startup is using drones and AI to save the world's architectural heritage

Now active in over 30 countries around the world, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at unprecedented resolutions and sharing its many-terabyte models with researchers and the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem’s work, technology, and plans for the future.

The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do.

Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archeological sites, and I was concerned by that. I wanted to find a new way to record these sites and to preserve them, even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, to create a digital copy or model of each site, together with partner researchers in these different countries.

AN: You mentioned drones and AI; what technology are you using?

YU: We have a partnership with a lab in France, INRIA (Institut National de Recherche en Informatique et en Automatique/the National Institute for Research in Computer Science and Automation). They developed an algorithm that can transform 2D pictures into a 3D point cloud, which is a projection of every pixel of the picture into space. 
These points in the point cloud reproduce the shape and the color of the environment, the buildings, and so on. It takes billions of points to reproduce the complexity of a place in a photorealistic manner, but the points are so tiny and so numerous that you cannot see the individual points; you see only the shape of the building in 3D.

Etienne Tellier: The generic term for the technology that converts big datasets of pictures into 3D models is photogrammetry.

YU: Which is just one process. Photogrammetry was invented more than 100 years ago… Before, it was a manual process, and we were only able to reproduce a part of a wall or something like that. Big data processing has made it possible to reproduce a huge part of the real environment. It’s a very new way of doing things. Just in the last two years, we’ve become able to make a copy of an entire city—like Mosul or Aleppo—something not possible before. We also have a platform to manage this huge amount of data, and we’re working with cloud computing. In the future we want to open this platform to the public.

AN: All of this technology has already grown so quickly. What do you see coming next?

YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us because they're in a rebel zone, for example. Cameras also continue to get better and better. Today we can produce a model with one point per millimeter, and I think in the future we will be able to have ten points per millimeter. That will enable us to see every detail of something like small writing on a stone.

ET: Another possible evolution, which we are already beginning to see thanks to artificial intelligence, is automatic recognition of what is shown in a 3D model. That's something you can already have with 2D pictures. 
There are algorithms that can analyze a 2D picture and say, "Oh okay, this is a cat. This is a car." Soon there will probably be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, "Okay, this is a Corinthian column. This dates back to the second century BC." Another technology we are working on is the ability to create beautiful images from 3D models. We’ve had difficulties to overcome because our 3D models are huge. As Yves said before, they are composed of billions of points. For the moment there is no 3D software on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, where we don't have to lower the quality of our 3D models. We can keep the native resolution and photorealism of our big 3D models and create very beautiful videos from them that can be as big as 32K and can be projected onto very large areas. There will be big developments in this field in the future.

AN: Speaking of projections, what are your approaches to making your research accessible? Once you've preserved a site, how does it become something that people can experience, whether they're specialists or the public?

YU: There are two ways to open this data to the public. The first is producing digital exhibitions that people can see, which we are doing today for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection-mapped a huge box, 600 square meters [6,458 square feet], with 3D video. 
We were able to project monuments like the Damascus Mosque or the Palmyra sites, and visitors are surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a one-to-one scale so the visitor feels like they're inside the sites.

AN: So you could project one to one?

ET: Yes, we can project one to one. For example, in the exhibition we participated in recently at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. Often the visitor could see the sites at a one-to-one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. They can discover not only the city as a whole but also the monuments and the architectural details. They can switch seamlessly between different scales: the macro scale of the city, the more micro one of a monument, and then the very micro one of a detail.

AN: What are you working on now?

ET: Recently, we participated in an exhibition financed by Microsoft and held in Paris at the Musée des Plans-Reliefs, a museum that houses replicas of the most important sites in France. They're 3D architectural replicas, or maquettes, that can be 3 meters [approx. 10 feet] wide. They were commissioned by Louis XIV and created during the 17th century because he wanted replicas on hand to prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality, and they proposed making an experience in this museum focusing on the replica of Mont-Saint-Michel, the famous site in France.
We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both sites—the replica and the real site—and used the 3D models to create holograms that were embedded and superimposed. Through headsets, visitors could see a hologram of water rising and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, and the interplay between the two. You could also see the site as it was hundreds of years before. It was a whole new kind of experience relying on augmented reality, and we were really happy to take part in it. This exhibition should travel to Seattle soon.

Amazon is bringing its seamless automated grocery store to New York

Imagine a world where artificial intelligence tracks your every movement. A world where buildings have minds of their own, learning your behaviors and collecting data as you come and go. While existing technology has not yet reached sci-fi levels, a visit to an Amazon Go grocery store offers a peek into this possible future of retail design. This week, Amazon announced plans to open a new store in New York, the first of its kind on the East Coast, ahead of opening nearly 3,000 more nationwide by 2021. The company has already built six Amazon Go stores across Seattle, Chicago, and San Francisco. The cutting-edge stores, as seen in the first locations, are characterized by visual simplicity, clarity, and hyper-functionality. Through structural elements including minimalist facades, geometric configurations, and exposed raw materials such as wood veneer and polished concrete, the interiors take on an industrial feel. Muted colors and black merchandise racks give the stores a clean appearance as well. Meanwhile, ceiling cameras monitor shoppers as they wander through the aisles. The stores are unique in that they are devoid of cashiers, cash registers, and self-service checkout stands. Customers simply walk in, take what they need, and leave. As they swing through the turnstiles on the way out, Amazon automatically bills their credit cards. Within minutes, a receipt is sent to the Amazon app, giving customers a summary of what they bought, what they paid, and the exact amount of time they spent in the store. The stores, which depend on highly sophisticated image recognition software and artificial intelligence to function, are expected to drastically transform the retail experience in unexpected ways. Amazon began working on the retail stores five years ago with the goal of eliminating common consumer complaints, such as struggling to find products and waiting in long lines.
Since the first Amazon Go store opened last January in Seattle, it has met with tremendous praise and success. According to CNN, highly automated retail stores like Amazon Go are expected to become the norm within as little as 10 to 15 years. Research has shown that up to 7.5 million retail jobs are at risk of automation in the next decade, which would save retailers money on labor and boost profits, but cost retail workers their livelihoods. Automated stores can streamline ordering and restocking, as cameras and AI track inventory in real time. The removal of cash registers frees up more space for inventory. Customer data can also be uploaded to each building's servers, where retailers can use it to present shoppers with personalized discounts, offers, and other incentives. While Amazon has confirmed plans to open an Amazon Go store in New York, its location has yet to be determined.
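The checkout-free flow described above (enter through a turnstile, pick up items under camera tracking, walk out, get billed) can be sketched as a simple session model. Amazon's actual system is proprietary; every class, method, and field name below is an illustrative assumption, not Amazon's API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class GoSession:
    """One shopper's camera-tracked visit, from turnstile entry to exit (hypothetical model)."""
    entered_at: datetime
    items: list = field(default_factory=list)

    def pick_up(self, name: str, price: float) -> None:
        # Ceiling cameras and image recognition attribute the item to this shopper's virtual cart.
        self.items.append((name, price))

    def put_back(self, name: str) -> None:
        # Returning an item to the shelf removes it from the virtual cart.
        for i, (n, _) in enumerate(self.items):
            if n == name:
                del self.items[i]
                return

    def exit(self, exited_at: datetime) -> dict:
        # Walking out triggers the automatic charge and the app receipt,
        # including the time spent in the store.
        return {
            "total": round(sum(p for _, p in self.items), 2),
            "item_count": len(self.items),
            "minutes_in_store": round((exited_at - self.entered_at).total_seconds() / 60, 1),
        }

session = GoSession(entered_at=datetime(2018, 10, 1, 12, 0))
session.pick_up("sandwich", 5.99)
session.pick_up("coffee", 2.50)
session.put_back("coffee")
receipt = session.exit(datetime(2018, 10, 1, 12, 7))
print(receipt)  # {'total': 5.99, 'item_count': 1, 'minutes_in_store': 7.0}
```

The put_back step is what makes the computer-vision problem hard in practice: the system must notice not only what leaves a shelf but what comes back.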

MIT announces $1 billion campus focused on AI advancement

The encroachment of self-driving cars, acrobatic terminators, and decades of media hysterics over the destructive potential of artificial intelligence (AI) has brought questions of robot ethics into the public consciousness. Now, MIT has leaped into the fray and will tackle those issues head-on with the announcement of a new school devoted solely to the study of the opportunities and challenges that the advancement of AI will bring. The new MIT Stephen A. Schwarzman College of Computing, named for the Blackstone CEO who gave a $350 million foundational grant to launch the endeavor, will get its own new headquarters building on the MIT campus. While the gift is large, the final cost of establishing the new school has been estimated at a whopping $1 billion; MIT has reportedly already raised another $300 million for the initiative and is actively fundraising to close the gap. "As computing reshapes our world, MIT intends to help make sure it does so for the good of all," wrote MIT president L. Rafael Reif in the announcement. "In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work." As Reif told the New York Times, the goal is to "un-silo" previously self-contained academic disciplines and create a center where biologists, physicists, historians, and researchers from any other field can study the integration of AI and data science into their disciplines. Rather than offering a standard double major, the new school will instead integrate computer science into the core of every course offered there.
The college will also host forums and advance policy recommendations on the developing field of AI ethics. The Stephen A. Schwarzman College of Computing is set to open in September 2019, and the new building is expected to be complete in 2022. No architect has been announced yet; AN will update this article when more information is available.

New startup is building a stock exchange for high-value art

For centuries, art collecting and art brokering were comfortably ensconced in the exclusive domain of the super-rich. Now, things could be changing. A startup called Masterworks is trying to build a stock exchange for high-value art. This summer, Masterworks acquired Claude Monet's Coup de Vent at auction for $6.3 million and plans to launch a public offering with individual shares valued at $20. Unfortunately, fractional ownership won't allow you to take the painting home and hang it over your mantelpiece, but it will let you participate in a highly lucrative and historically tested investment vehicle that has never before been available to the masses. According to Masterworks' website, masterpiece paintings have outpaced growth in the leading U.S. stock exchange index by nearly 300 percent over the last 20 years. Current sales of expensive art are limited by liquidity and transaction protocol: there simply aren't that many people willing to fork out a hundred million dollars for a Picasso on any given day. By creating an affordable entry point, Masterworks also hopes to help art reach a wider audience while giving the general public more agency in selecting and determining the true value of cultural artifacts. Masterworks art transactions will be recorded through an artificial intelligence (AI) platform that creates transparency and immutability. Ownership can be clearly tracked using smart contracts and digital currencies, allowing for instant transfers across geographic borders without the involvement of banks and auction houses, which traditionally charge high fees. New AI-based art platforms like KODAKone, for example, allow artists to register their artwork digitally before entering the art market, providing security to all present and future stakeholders. Ironically, your digital kitten art may one day be more easily authenticated than Leonardo da Vinci's Salvator Mundi, which recently sold at auction for $450 million.
Plus, telling your friends down at the pub that you own a Picasso could be, well…priceless.
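The arithmetic behind such an offering is simple enough to sketch. The two figures below come from the article (the $6.3 million purchase price and the $20 share price); everything else, including the assumption that the painting divides cleanly into shares with no fees, is a back-of-envelope simplification, since Masterworks' actual offering structure isn't described here.

```python
# Back-of-envelope math for fractional art ownership, using the
# article's figures. Fees and offering structure are ignored.
purchase_price = 6_300_000   # Coup de Vent auction price, USD
share_price = 20             # price of one share, USD

total_shares = purchase_price // share_price
print(total_shares)          # 315000 shares in circulation

# A $1,000 stake would correspond to a tiny slice of the painting:
stake_pct = 1_000 / purchase_price * 100
print(round(stake_pct, 4))   # 0.0159 (percent)
```

In other words, a $20 share buys roughly one three-hundred-fifteen-thousandth of the Monet, which is exactly the kind of entry point the article says has never been open to small investors before.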