ARTECHOUSE, a technology-focused art exhibition platform conceived in 2015 by Sandro Kereselidze and Tati Pastukhova, has been presenting digitally inspired art in Washington, D.C., and Miami. Now they’re coming to New York, “a clear next step for [their] mission,” with an inaugural exhibition by Refik Anadol. The Istanbul-born, Los Angeles-based Anadol is known for light and projection installations that often have an architectural component, such as the recent animation projected onto the facade of the Frank Gehry-designed Walt Disney Concert Hall.

For ARTECHOUSE in New York, Anadol will present Machine Hallucination, his first large-scale exhibition in the city. The installation will create what he calls “architectural hallucinations” derived from millions of images processed by artificial intelligence and machine learning algorithms. “With Refik, it’s been a collaborative process for over a year and a half, bringing a new commission, Machine Hallucination, to life,” explained Kereselidze and Pastukhova. “We have worked closely with Refik to develop the concept for this exciting new work, thinking carefully about how to most effectively utilize and explore our Chelsea Market space.”

ARTECHOUSE is especially suited to visualizing Anadol’s “data universe,” with a floor-to-ceiling, room-wrapping 16K laser projector that the founders claim features “the largest seamless megapixel count in the world,” along with 32-channel sound from L-ISA. More than 3 million photos, representing numerous architectural styles and movements, will be made to expose (or generate) latent connections between these representations of architectural history, producing “hallucinations” that challenge our notions of space and how we experience it—and providing insight into how machines might experience space themselves. It makes us consider what happens when architecture becomes information.
Of the work, Anadol said, “By employing machine intelligence to help narrate the hybrid relationship between architecture and our perception of time and space, Machine Hallucination offers the audience a glimpse into the future of architecture itself.” Machine Hallucination will inhabit the new 6,000-square-foot ARTECHOUSE space in Chelsea Market, located in a more-than-century-old former boiler room with exposed brick walls and a refurbished terracotta ceiling that, according to its creators, “supplies each artist with a unique canvas and the ability to drive narratives connecting the old and new.” ARTECHOUSE opens to the public early next month.
Construction remains one of the most dangerous careers in the United States. To stop accidents before they happen, construction companies are turning to emerging technologies to improve workplace safety, from virtual reality and drone photography to IoT-connected tools and machine learning. That said, some solutions come with the looming specter of workplace surveillance in the name of safety, with all of the Black Mirror-esque possibilities that entails.

The Boston-based construction company Suffolk has turned to artificial intelligence to try to make construction safer. Suffolk has been collaborating with the computer vision company Smartvid.io to create a digital watchdog of sorts that uses a deep-learning algorithm and workplace images to flag dangerous situations and workers engaging in hazardous behavior, like failing to wear safety equipment or working too close to machinery. Suffolk has even managed to get some of its smaller competitors to join it in data sharing, a mutually beneficial arrangement since machine learning systems require large amounts of example data, something that's harder for smaller operations to gather. Suffolk hopes to use this decade’s worth of aggregated information, along with scheduling data, reports, and readings from IoT sensors, to create predictive algorithms that will help prevent injuries and accidents before they happen and increase productivity.

Newer startups are also entering the AEC AI fray, including three supported by URBAN-X. The bi-coastal Versatile Natures bills itself as the "world's first onsite data-provider," aiming to transform construction sites with sensors that let managers make decisions proactively. Buildstream embeds sensors in equipment and construction machinery so the machines themselves can communicate, while Contextere focuses on people instead, claiming that its use of the IoT will connect different members of the workforce.
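Smartvid.io's models are proprietary, but the kind of rule that turns raw computer-vision output into a safety flag can be sketched. In this illustrative Python snippet (all labels, boxes, and thresholds are hypothetical), an upstream detector supplies labeled bounding boxes for each site photo, and any detected person whose head region overlaps no hard-hat detection is flagged for review:

```python
# Illustrative sketch only; Smartvid.io's actual system is proprietary.
# Assume an upstream object detector emits (label, box) pairs per photo,
# where box = (x1, y1, x2, y2) in pixels.

def _iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def flag_missing_hardhats(detections, min_iou=0.05):
    """Return boxes of detected people with no hard hat near their head."""
    people = [box for label, box in detections if label == "person"]
    hats = [box for label, box in detections if label == "hardhat"]
    flagged = []
    for person in people:
        # Compare hats against the top quarter of the person box (head region).
        x1, y1, x2, y2 = person
        head = (x1, y1, x2, y1 + (y2 - y1) / 4)
        if not any(_iou(head, hat) >= min_iou for hat in hats):
            flagged.append(person)
    return flagged
```

In practice the detector itself is where the deep learning lives; the flagging rule on top of it stays simple so safety managers can audit why a photo was surfaced.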
The Florida-based firm Haskell, meanwhile, is addressing the problem before construction workers even get into the field. While videos and quizzes are one way to train employees, Haskell saw the potential for interactive technologies, namely virtual reality, to boost employee training in a safe context. In its search for suitable VR systems, however, Haskell found that no existing solutions were well suited to the particulars of construction. Along with its venture capital spinoff, Dysruptek, the firm partnered with software engineering and game design students at Kennesaw State University in Georgia to develop the Hazard Elimination/Risk Oversight program, or HERO, built on software like Revit and Unity. The video game-like program places users in a job site, derived from images taken by drone and 360-degree cameras at a Florida wastewater treatment plant that Haskell built, and evaluates a trainee’s performance and ability to follow safety protocols in an ever-changing environment.

At Skanska USA, where 360-degree photography, laser scanning, drones, and even virtual reality are becoming increasingly commonplace, employees are realizing the potential of these new technologies not just for improved efficiency and accuracy in design and construction, but for overall job site safety. Albert Zulps, Skanska’s Regional Director of Virtual Design and Construction, says the tech goes beyond BIM and design uses and actively helps avoid accidents. “Having models and being able to plan virtually and communicate is really important,” Zulps explained, noting that while BIM models are now almost universally trusted in the AEC industries, the increased accuracy of capture technologies is making them more accurate still, adapting them not just to predictions but to the realities of the site. “For safety, you can use those models to really clearly plan your daily tasks.
You build virtually before you actually build, and then foresee some of the things you might not have if you didn't have that luxury.” Like Suffolk, Skanska has partnered with Smartvid.io to help them process data. As technology continues to evolve, the ever-growing construction industry will hopefully be not just more cost-efficient, but safer overall.
R+D for the Built Environment is sponsoring a 6-month, paid, off-site design fellowship program starting this summer. We're looking for four candidates in key R+D topic areas:
- Building material science
- 3D printing, robotics, AR/VR
- AI, machine learning, analytics, building intelligence
- Quality housing at a lower cost
- Building resiliency and sustainability
- Workplace optimization
- Adaptable environments
Now active in over 30 countries around the world, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at unprecedented resolutions and sharing its many-terabyte models with researchers and the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem’s work, technology, and plans for the future.

The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do.

Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archaeological sites and I was concerned by that. I wanted to find a new way to record these sites and to preserve them even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, to create a digital copy or model of each site along with partner researchers in these different countries.

AN: You mentioned drones and AI; what technology are you using?

YU: We have a partnership with a lab in France, INRIA (Institut National de Recherche en Informatique/National Institute for Research in Computer Science and Automation). They developed an algorithm that can transform a 2D picture into a 3D point cloud, which is a projection of every pixel of the picture into space.
These points in the point cloud in turn reproduce the shape and the color of the environment, the buildings, and so on. It takes billions of points to reproduce the complexity of a place in a photorealistic manner, but the points are so tiny and so numerous that you cannot see the individual points: you see only the shape of the building in 3D.

Etienne Tellier: The generic term for the technology that converts these big datasets of pictures into 3D models is photogrammetry.

YU: Which is just one process. Even still, photogrammetry was invented more than 100 years ago… Before, it was a manual process, and we were only able to reproduce just a part of a wall or something like that. Big data processing has led us to be able to reproduce a huge part of the real environment. It’s a very new way of doing things. Just in the last two years, we’ve become able to make a copy of an entire city—like Mosul or Aleppo—something not even possible before. We also have a platform to manage this huge amount of data, and we’re working with cloud computing. In the future we want to open this platform to the public.

AN: All of this technology has already grown so quickly. What do you see coming next?

YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us, because they're in a rebel zone, for example. Cameras also continue to become better and better. Today we can produce a model with one point per millimeter, and I think in the future we will be able to have ten points per millimeter. That will enable us to see every detail of something like small writing on a stone.

ET: Another possible evolution, and we are already beginning to see this happen thanks to artificial intelligence, is automatic recognition of what is shown by a 3D model. That's something you can already have with 2D pictures.
There are algorithms that can analyze a 2D picture and say, "Oh okay, this is a cat. This is a car." Soon there will probably be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, "Okay, this is a Corinthian column. This dates back to the second century BC."

One of the technologies we are working on is creating beautiful images from 3D models. We’ve had difficulties to overcome because our 3D models are huge. As Yves said before, they are composed of billions of points. For the moment there is no 3D software on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, with which we don't have to lower the quality of our 3D models. We can keep the native resolution and photorealism of our big 3D models and create very beautiful videos from them that can be as big as 32K and can be projected onto very large areas. There will be big developments in this field in the future.

AN: Speaking of projections, what are your approaches to making your research accessible? Once you've preserved a site, how does it become something that people can experience, whether they're specialists or the public?

YU: There are two ways to open this data to the public. The first is producing digital exhibitions that people can see, which we are currently doing for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection-mapped a huge box, 600 square meters [6,458 square feet], with 3D video.
We were able to project monuments like the Damascus Mosque and the sites of Palmyra, and the visitors are surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a scale of one to one, so the visitor feels like they’re inside the sites.

AN: So you could project one to one?

ET: Yes, we can project one to one. For example, in the exhibition we participated in recently at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. Often the visitor could see the sites at a one-to-one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. And they can discover not only the city as a whole but also the monuments and the architectural details. They can switch between different scales seamlessly: the macro scale of the city, the more micro one of the monument, and then the very micro one of a detail.

AN: What are you working on now?

ET: Recently, we participated in an exhibition financed by Microsoft that was held in Paris at the Musée des Plans-Reliefs, a museum that holds replicas of the most important sites in France. They're 3D architectural replicas, or maquettes, that can be 3 meters [approx. 10 feet] wide; they were commissioned by Louis XIV and created during the 17th century because he wanted replicas to prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality, and they proposed making an experience in this museum focusing on the replica of Mont-Saint-Michel, the famous site in France.
We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both sites, the replica and the real site, and used them to create the holograms that were embedded and superimposed. Through headsets, visitors would see a hologram of water rising up and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, and the interplay between the two. And you could also see the site as it was hundreds of years before. It was a whole new kind of experience relying on augmented reality, and we were really happy to take part. The exhibition should travel to Seattle soon.
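The point-cloud reconstruction Ubelmann describes rests on photogrammetric triangulation: the same scene point, photographed from two cameras with known positions, pins down a single 3D location. The toy Python sketch below (a noiseless two-view case with made-up camera matrices, nothing like Iconem's production pipeline) shows the core linear-triangulation step that real pipelines repeat for millions of matched pixels:

```python
import numpy as np

# Toy illustration of photogrammetry's core step: given the pixel coordinates
# of one scene point in two calibrated photos, recover its 3D position by
# linear (DLT) triangulation.

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel coordinates in two views.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) pixels."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: last right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0, 1.0])       # ground-truth scene point
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]     # its projection in view 1
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]     # its projection in view 2
recovered = triangulate(P1, P2, uv1, uv2)
```

Drone surveys simply multiply this: thousands of overlapping photos, feature matching to pair up pixels, and bundle adjustment to refine the camera matrices, yielding the billions-of-points clouds described above.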
Imagine a world where artificial intelligence tracks your every movement, where buildings have minds of their own, learning your behaviors and collecting data from you as you come and go. While existing technology has not yet reached sci-fi levels, a visit to an Amazon Go grocery store can offer a peek into this possible future of retail design. This week Amazon announced plans to open a new store in New York, the first of its kind on the East Coast, before opening nearly 3,000 more nationwide by 2021. The company has already built out six Amazon Go stores in Seattle, Chicago, and San Francisco.

The cutting-edge stores, as shown by the first locations, are characterized by visual simplicity, clarity, and hyper-functionality. Through their structural elements, including minimalistic facades, geometric configurations, and exposed raw materials such as wood veneer and polished concrete, the interiors assume an industrial feel. Muted colors and black merchandise racks give the stores a clean appearance as well. Meanwhile, ceiling cameras monitor shoppers as they wander through the aisles.

The stores are unique in that they are devoid of cashiers, cash registers, and self-service checkout stands. Customers only need to walk in, take what they need, and leave. As they swing through the turnstiles on their way out, Amazon automatically bills their credit cards. Within minutes, a receipt is sent to the Amazon app, giving customers a summary of what they bought, what they paid, and the exact amount of time they spent in the store. The stores, which depend on highly sophisticated image recognition software and artificial intelligence to function, are expected to drastically transform the retail experience. Amazon began working on retail stores five years ago with the goal of eliminating common consumer complaints, such as struggling to find products and waiting in long lines.
Since the first Amazon Go store opened last January in Seattle, it has met with tremendous praise and success. According to CNN, highly automated retail stores like Amazon Go are expected to become the norm within as little as 10 to 15 years. Research has shown that up to 7.5 million retail jobs are at risk of automation in the next decade, which will save retailers money on labor and boost profits, but will also cost retail workers their livelihoods. Automated stores can streamline ordering and restocking, as cameras and AI track inventory in real time. The removal of cash registers frees up more space for inventory. Customer data can also be uploaded to each building's servers, where retailers can use it to present shoppers with personalized discounts, offers, and other incentives. While Amazon has confirmed plans to open an Amazon Go store in New York, its location has yet to be determined.
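The cashierless flow described above (enter, grab, leave, get billed) implies a per-shopper virtual cart that the cameras keep in sync as items are taken or put back. Amazon's actual system is unpublished; the toy Python sketch below, with hypothetical prices and event names, models only that downstream billing logic:

```python
# Hedged sketch: Amazon's "Just Walk Out" pipeline is proprietary. This models
# only the billing layer: vision emits take/return events per shopper, and the
# cart settles when the shopper exits through the turnstile.

from collections import Counter

PRICES = {"sandwich": 5.99, "soda": 1.89, "salad": 7.49}  # hypothetical catalog

class VirtualCart:
    def __init__(self):
        self.items = Counter()

    def take(self, sku):
        self.items[sku] += 1

    def put_back(self, sku):
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def settle(self):
        """Called when the shopper exits; returns receipt lines and total."""
        receipt = {sku: (n, round(PRICES[sku] * n, 2))
                   for sku, n in self.items.items() if n > 0}
        total = round(sum(amount for _, amount in receipt.values()), 2)
        return receipt, total

cart = VirtualCart()
cart.take("sandwich")
cart.take("soda")
cart.take("soda")
cart.put_back("soda")          # shopper changes their mind; vision logs a return
receipt, total = cart.settle()
```

The hard part in production is upstream of this: attributing each shelf event to the right shopper from camera and sensor data, which is where the image recognition and AI the article mentions come in.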
The encroachment of self-driving cars, acrobatic terminators, and decades of media hysterics over the destructive potential of artificial intelligence (AI) have brought questions of robot ethics into the public consciousness. Now, MIT has leaped into the fray and will tackle those issues head-on with the announcement of a new school devoted solely to the study of the opportunities and challenges that the advancement of AI will bring.

The new MIT Stephen A. Schwarzman College of Computing, named for the Blackstone CEO who gave a $350 million foundational grant to launch the endeavor, will get its own new headquarters building on the MIT campus. While that is a large gift, the final cost of establishing the new school has been estimated at a whopping $1 billion; MIT has reportedly already raised another $300 million for the initiative and is actively fundraising to close the gap.

“As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” wrote MIT president L. Rafael Reif in the announcement. “In keeping with the scope of this challenge, we are reshaping MIT.

“The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work.”

As Reif told the New York Times, the goal is to “un-silo” previously self-contained academic disciplines and create a center where biologists, physicists, historians, and researchers from any other discipline can study the integration of AI and data science into their fields. Rather than offering a standard double major, the new school will instead integrate computer science into the core of every course offered there.
The college will also host forums and advance policy recommendations on the developing field of AI ethics. The Stephen A. Schwarzman College of Computing is set to open in September 2019, and the new building is expected to be complete in 2022. No architect has been announced yet; AN will update this article when more information is available.
For centuries, art collecting and art brokering were comfortably ensconced in the exclusive domain of the super-rich. Now, things could be changing. A start-up called Masterworks is trying to build a stock exchange for high-value art. This summer, Masterworks acquired Claude Monet’s Coup de Vent at auction for $6.3 million and plans to launch a public offering with individual shares valued at $20. Unfortunately, your fractional ownership won’t allow you to take the painting home and hang it over your mantelpiece, but it will allow you to participate in a highly lucrative and historically tested investment vehicle that has never before been available to the masses. According to Masterworks’ website, masterpiece paintings have outpaced growth in the leading U.S. stock exchange index by nearly 300 percent over the last 20 years.

Current sales of expensive art are limited by liquidity and transaction protocol; there simply aren’t that many people willing to fork out a hundred million dollars for a Picasso on any given day. By creating an affordable entry point, Masterworks also hopes to help art reach a wider audience while giving the general public more agency in selecting and determining the true value of cultural artifacts.

Masterworks art transactions will be recorded on a blockchain platform that provides transparency and immutability. Ownership can be clearly tracked using smart contracts and digital currencies, allowing for instant transfers across geographic borders without the involvement of banks and auction houses that traditionally charge high fees. New blockchain-based art platforms like KODAKone, for example, allow artists to register their artwork digitally before entering the art market, providing security to all present and future stakeholders. Ironically, your digital kitten art may one day be more easily authenticated than Leonardo da Vinci’s Salvator Mundi, which recently sold at auction for $450 million.
Plus, telling your friends down at the pub that you own a Picasso could be, well…priceless.
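The immutability and transparent ownership tracking described above are usually achieved with a hash-chained ledger, where each transfer record commits to the record before it, so rewriting history breaks the chain. A toy Python version (purely illustrative, not Masterworks' actual platform; all names and figures are invented):

```python
import hashlib
import json

# Toy hash-chained provenance ledger: each entry stores the hash of the
# previous entry, making any later edit to history detectable.

def _digest(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.chain = []

    def record_transfer(self, artwork, seller, buyer, shares):
        entry = {
            "artwork": artwork, "seller": seller, "buyer": buyer,
            "shares": shares,
            "prev": self.chain[-1]["hash"] if self.chain else None,
        }
        entry["hash"] = _digest(entry)   # hash computed before the key is added
        self.chain.append(entry)

    def verify(self):
        """Recompute every hash; any edit to an earlier entry breaks the chain."""
        prev = None
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev"] != prev or _digest(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

ledger = ProvenanceLedger()
ledger.record_transfer("Coup de Vent", "auction_house", "masterworks", 315000)
ledger.record_transfer("Coup de Vent", "masterworks", "investor_42", 10)
intact = ledger.verify()              # True: chain is self-consistent
ledger.chain[0]["shares"] = 999999    # tamper with history...
tampered_ok = ledger.verify()         # ...and verification now fails
```

Public blockchains add distributed consensus on top of this structure, so no single party can quietly regenerate the chain after tampering.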
If you think that urban planning and computer science go hand in hand, MIT’s new degree may just be the subject for you. The MIT faculty approved the bachelor of science in urban science and planning with computer science at its May 16 meeting; the degree will be available to all undergraduates starting in the fall 2018 semester. The new major is offered jointly by the Department of Urban Studies and Planning and the Department of Electrical Engineering and Computer Science. According to a press release, it will combine “urban planning and public policy, design and visualization, data analysis, machine learning, and artificial intelligence, pervasive sensor technology, robotics and other aspects of both computer science and city planning.” The curriculum also incorporates ethics and geospatial analysis.

“The new joint major will provide important and unique opportunities for MIT students to engage deeply in developing the knowledge, skills, and attitudes to be more effective scientists, planners, and policy makers,” says Eran Ben-Joseph, head of the Department of Urban Studies and Planning. “It will incorporate STEM education and research with a humanistic attitude, societal impact, social innovation, and policy change — a novel model for decision making to enable systemic positive change and create a better world. This is really unexplored, fertile new ground for research, education, and practice.”

Students will spend time in the urban science synthesis lab, a required component of the degree, where advanced technological tools will be an integral part of the coursework.
Meet the incubators and accelerators producing the new guard of design and architecture start-ups. This is part of a series profiling incubators and accelerators from our April 2018 Technology issue.

The age of the car as we know it appears to be winding down—that is, if the diverse initiatives started by car companies are any indication. For example, in Greenpoint, Brooklyn, the BMW-owned MINI recently launched A/D/O, a nARCHITECTS-designed makerspace and the headquarters of URBAN-X, an accelerator for start-ups seeking to improve urban life. Although URBAN-X is only two years old, the company has hit the ground running thanks to MINI’s partnership with Urban Us, a network of investors focused on funding start-ups that use technology to improve urban living. Through that partnership, URBAN-X is able to use its funding from MINI to take on companies that lack finished products or established customers and then connect them to the Urban Us community. Over a rigorously programmed five-month semester, up to ten start-ups at a time work with in-house engineering, software, marketing, and urbanism experts and are given access to the outside funding and political connections that URBAN-X is able to leverage. Competition to join each cohort is fierce, especially since the chosen companies are given $100,000 in initial funding. Architects, planners, urban designers, construction workers, and others with a background in thinking about cities have historically applied. At the time of writing, the third group had just finished its tenure and presented an overview of its work at A/D/O at a Demo Day on February 9. The companies have since followed up with whirlwind tours to court investors and realize their ideas. The diversity of projects that have come out of URBAN-X represents the wide-ranging problems that face any modern city. The solutions aren’t entirely infrastructure-based, either.
For example, Farmshelf has gained critical acclaim by moving urban farming into sleek, indoor “growing cabinets”; Industrial/Organic is turning decomposing food waste into electricity; and Good Goods has created a platform for smaller retailers to occupy large vacant spaces by pooling money. Ultimately, as cities evolve and become more interconnected, addressing the problems found within them will require ever more complicated and multidisciplinary solutions. The fourth URBAN-X cohort will be announced on May 10, 2018.

Notable alumni include:
- Numina: A start-up that uses sensor-integrated streetlights to map traffic patterns.
- Lunewave: A technology company that claims its spherical sensor for self-driving cars is cheaper and more effective than the LiDAR (light detection and ranging) systems currently in widespread use (likely a win for MINI and BMW).
- Sencity: A platform that encourages human engagement in smart cities.
- RoadBotics: A tool that uses smartphone monitoring to improve road maintenance.
- Qucit: Software that aggregates urban planning data and uses AI to optimize everything from emergency response times to park planning.
This is the fourth column of “Practice Values,” a bi-monthly series by architect and technologist Phil Bernstein. The column focuses on the evolving role of the architect at the intersection of design and construction, including subjects such as alternative delivery systems and value generation. Bernstein was formerly a vice president at Autodesk and now teaches at the Yale School of Architecture.

In my last column I explored the potential impacts of next-generation technology—particularly machine intelligence (also known as artificial intelligence or AI) and crowd-sourced knowledge—on the hegemony of professionalism for architects. This question was recently explored further by Daniel Susskind, one of the authors of an Oxford study published in a RIBA journal article entitled “The Way We’ll Work Tomorrow,” which suggested that modern knowledge work, like much of that performed by architects today, should be considered not so much “by job” as “by task,” and that many of those tasks are likely to be automated in the future.

Professions exist to systematize expertise and, by extension, control access to it. Computation democratizes access to that expertise by digitizing and distributing it, but does this lead to an inevitable decline in the need for professionals themselves? Like manufacturing workers in the 20th century, knowledge workers are likely to be “de-skilled” in the 21st, as routine, transactional, and analytical tasks are performed by machine-learning algorithms referencing big data sources, and the need for human abilities for those same chores is eliminated. Just as CAD rendered my once-fearsome hand-drafting skills mostly irrelevant, expert systems may do the same with today’s expertise in, say, cost estimating or construction documentation.
Even though architectural design writ large is a profoundly creative act, the more prosaic components—preparing schedules, measuring and calculating, even evaluating performance characteristics like safety or zoning conformance—comprise a sizable portion of the architect’s fee. Production tasks connected to technical documentation alone (think CD phase work) can be as much as 40 percent of compensation on a project. Once this stuff gets automated, will there be much less work, and will we need far fewer architects? Perhaps—unless we find alternate strategies for demonstrating the value of our efforts. Oxford’s Susskind suggests that while the “job of an architect” may be profoundly transformed with technology, the profession should reconsider some of our critical tasks in response. If design processes will inevitably be augmented by computation, we might control our destiny by taking on the problem of creating the resulting computational platforms: engineering knowledge systems and structures, developing workflow protocols for analysis and evaluation, and designing new systems from which design itself can spring. In some sense, this is meta-design—not unlike the work we’ve seen since the advent of BIM that required technology-implementation plans, data standards, and integrated multidisciplinary information flows. Cutting-edge design firms rely heavily on scripts and so-called “generative design” techniques, and what Susskind recommends here is a logical extension of that strategy that augments (rather than replaces) the capabilities of designers. Of course, the same technologies that might appear to be threats to our autonomy as architects could, jujitsu-style, be turned into opportunities. Susskind suggests that automation offers the immediate benefit of making routine activities more efficient; perhaps repurposing those newly found hours means more time to improve design. 
He further recommends that our knowledge and influence could be magnified via consortia of digitally connected professionals, what he calls “communities of expertise,” where the sum is far greater than the individual parts. Author and Harvard architecture professor Peter Rowe once described the design process as dependent upon heuristic reasoning, since all design challenges are complex and somewhat open-ended, with ambiguous definitions and indeterminate endpoints, borrowing from design theorist Horst Rittel, who characterized these as “wicked problems.” Computers themselves aren’t, at least today, particularly good at heuristics or solving wicked problems, but they are increasingly capable of attacking the “tame” ones, especially those that require managing complex, interconnected quantitative variables like sustainable performance, construction logistics, and cost estimation. And since clients have a strong interest in seeing those things done well, why not lean into the chance to complement heuristics with some help on the tame problems, and capture the resulting value? That architects are so well suited to the challenges of the wicked problem bodes well for us in the so-called "Second Machine Age," when machines don’t just do things we program them to do, but can learn how to do new things themselves.
The essential value of architects as professionals who can understand and evaluate a problem and synthesize unique and insightful solutions will likely remain unchallenged by our computer counterparts in the near future, an argument supported by a 2013 study of job computerization (again, at Oxford) that suggested that “occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.” Rather than rely upon this vaguely comforting conclusion, our profession must embrace and attack the wicked problem of the future of architecture and computational design and control the future of our profession accordingly. We’ll face far more opportunities than threats from computation if we can.
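For readers unfamiliar with the “generative design” techniques mentioned above, the core loop is simple: a script proposes many candidate solutions, scores each against measurable criteria, and surfaces the best for a human designer to judge — the machine enumerates, the architect decides. The sketch below is purely illustrative; the room dimensions, target area, and scoring rule are invented for the example and stand in for the far richer criteria real firms encode.

```python
import random

def score(width, depth, target_area=60.0):
    """Score a candidate rectangular room: penalize deviation from a
    target floor area and from a comfortable width-to-depth proportion.
    (Both targets are arbitrary stand-ins for real design criteria.)"""
    area_penalty = abs(width * depth - target_area)
    proportion_penalty = abs(width / depth - 1.6)  # loose "golden-ish" ratio
    return area_penalty + 10 * proportion_penalty

def generate_candidates(n=500, seed=0):
    """Propose n random room dimensions within plausible bounds (meters)."""
    rng = random.Random(seed)
    return [(rng.uniform(4, 12), rng.uniform(4, 12)) for _ in range(n)]

# The script ranks candidates; a designer would review the shortlist.
best = min(generate_candidates(), key=lambda wd: score(*wd))
```

In practice, of course, the scoring functions encode daylighting, structure, cost, and code compliance rather than a toy ratio — which is exactly the “meta-design” work of building the evaluation criteria themselves.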
This fake town built by the University of Michigan will serve as a testing ground for developing smarter driverless cars
Researchers at the University of Michigan just one-upped a recent virtual SimCity project for testing smart technologies of future cities. A tangible, 32-acre testing ground for driverless cars called MCity pits autonomous vehicles against every conceivable real-life obstacle, minus the caprice of human drivers. The uninhabited town in the university's North Campus Research Complex contains suburban and city roadways, building facades, sidewalks, bike lanes and streetlights. Recreating street conditions in a controlled environment means teaching robotic vehicles to interpret graffiti-defaced road signs, faded line markings, construction obstacles and other quotidian surprises which AI is still ill-equipped to handle. By dint of moveable facades, researchers can create any condition—from blind corners to odd intersections—to develop more conscientious self-driving vehicles. Vehicles will navigate city terrain from dirt to paving brick and gravel roads, decode freeway signs, and make split-second braking and lane-change decisions in a High-Occupancy Vehicle (HOV) lane at peak hours. "We believe that this transformation to connected and automated mobility will be a game changer for safety, for efficiency, for energy, and for accessibility," said Peter Sweatman, director of the U-M Mobility Transformation Center. "Our cities will be much better to live in, our suburbs will be much better to live in. These technologies truly open the door to 21st century mobility." MCity is the first major project of the University of Michigan Mobility Transformation Center, a partnership spanning government, academia, and industry. The initiative is backed by million-dollar investments from companies like Toyota, Nissan, Ford, GM, Honda, State Farm, Verizon, and Xerox, who will no doubt be affected should driverless cars go mainstream. 
The testing center is also tinkering with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity to investigate whether it aids individual vehicles in making better decisions. The university aims to eventually deploy 9,000 connected vehicles across the greater Ann Arbor area.
Real-life SimCity in New Mexico to become testing ground for new technologies that will power smart cities
What if a simulation video game became a powerful innovation lab for new urban technologies, where researchers could test-drive every outlandish “what-if?” in a controlled environment? The Center for Innovation, Technology and Evaluation is launching a full-scale SimCity—a small, fully functioning ghost town equipped with the technology touted by futurists as the next generation of smart cities. Resembling a modest American town built for a population of 35,000 spread over 15 miles, the mock city is sited on a desolate stretch of land in southern New Mexico. Set to be wired with mock-up utilities and telecommunications systems as realistically as possible, the quintessentially average town will even have a gas station, big box store, and a simulated interstate highway alongside its tall office buildings, parks, houses and churches. The town will also be sectioned into urban, rural and suburban zones. From nuclear war to natural disasters to a stock market crash or a triple whammy of all three, the ho-hum hypothetical town will soon play host to driverless cars and packages delivered by drones, alternative energy power generation and never-before-tested public monitoring, security and computer systems. The goal of CITE is to provide the opportunity to test large-scale technology experiments in real-world conditions “without anyone getting hurt,” said Bob Brumley, managing director of Pegasus Global Holdings, the Washington state-based technology development firm behind the concept. Brumley estimates that support infrastructure, including electric plants and telecommunications, will take 24 months to create, while the city will be fully built between 2018 and 2020. The uninhabited city makes it possible to test ideas that would otherwise be non-starters, hampered by safety and feasibility concerns in the real world, where human beings are the most fickle of variables. “It will be a true laboratory without complication and safety issues associated with residents. 
Here you can break things and run into things and get used to how they work before taking them out into the market,” Brumley told Wired. One of numerous experiments he envisions involves deploying a fleet of driverless freight trucks controlled by a centralized wireless network. Testing on a real freeway, on the other hand, would be too hazardous. Other ideas range from simple practicalities—having small drones drop off packages on doorsteps—to cataclysm readiness—simulating a large-scale, real-time attack on energy, telecommunications and traffic systems, or the effect of a “massive electromagnetic pulse attack on all the integrated circuits in our economy.” Brumley estimates $550–600 million in initial direct investment, with a total cost of $1 billion over the next five years as the city grows in size and complexity. We can only hope that their servers don’t crash.