Posts tagged with "Artificial Intelligence":


Jenny Sabin's installation for Microsoft responds to occupants' emotions

At Microsoft’s Redmond, Washington, campus, architect Jenny Sabin has helped realize a large-scale installation powered by artificial intelligence. Suspended from three points within an atrium, the two-story, 1,800-pound sculpture is a compressive mesh of 895 3D-printed nodes connected by fiberglass rods and arranged in hexagons, along with fabric knit from photoluminescent yarn. Created as part of Microsoft’s artist-in-residence program, the project is named Ada, after Ada Lovelace, the English mathematician whose work on the analytical engine laid the groundwork for computer programming as we know it. Anonymized information is collected from microphones and cameras throughout the building. An AI platform designed by a team led by researcher Daniel McDuff processes this data to sense people’s emotions from visual and sonic cues, like facial movements and tone of voice. The results are then synthesized and run through algorithms that create a shifting color gradient, which Ada displays through an array of LEDs, fiber optics, and PAR can lights. “To my knowledge, this installation is the first architectural structure to be driven by artificial intelligence in real time,” Sabin, Microsoft’s current artist in residence, told the company’s AI Blog. Microsoft touts Ada as an example of “embedded intelligence”: AI that is built into, and responsive to, our real-world environment. McDuff also hopes that his emotion-tracking technology, as dystopian as it might sound, could find applications in healthcare and other caregiving situations. (Microsoft employees are able to opt out of individuated tracking, and the company assures that all identifying information is removed from the media collected.)
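The pipeline described above reduces to a mapping from inferred emotion scores to a color the structure can display. A minimal sketch of such a mapping is below; the emotion categories, palette, and weighted-blend scheme are illustrative assumptions, not Microsoft's actual system.

```python
def emotion_to_rgb(scores):
    """Blend per-emotion colors into one RGB value, weighted by score.

    `scores` maps emotion names to confidences in [0, 1]. The palette
    below is a made-up illustration, not Ada's real mapping.
    """
    palette = {
        "joy": (255, 200, 0),
        "calm": (0, 120, 255),
        "surprise": (200, 0, 255),
    }
    total = sum(scores.get(e, 0.0) for e in palette)
    if total == 0:
        return (255, 255, 255)  # neutral white when nothing is sensed
    channels = [0.0, 0.0, 0.0]
    for emotion, color in palette.items():
        w = scores.get(emotion, 0.0) / total  # normalized weight
        for i in range(3):
            channels[i] += w * color[i]
    return tuple(round(c) for c in channels)
```

A real installation would smooth these values over time so the gradient shifts gradually rather than flickering with each new reading.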
Ada is part of a broader push to embed sensing and artificial intelligence into the built environment by Microsoft and many other companies, as well as artistic pavilions that grapple with the future of AI in our built world, like Refik Anadol's recent project at New York's ARTECHOUSE.

Aesthetics of Prosthetics compares computer-enhanced design practices

How have contemporary architecture and culture been shaped by our access to digital tools, technologies, and computational devices? This was the central question of Aesthetics of Prosthetics, the Pratt Institute Department of Architecture’s first alumni-run exhibition, curated by recent alumni and current students Ceren Arslan, Alican Taylan, Can Imamoglu, and Irmak Ciftci. The exhibition, which closed last week, took place at Siegel Gallery in Brooklyn. The curatorial team staged an open call for submissions that addressed the ubiquity of “prosthetic intelligence” in how we interact with and design the built environment. “We define prosthetic intelligence as any device or tool that enhances our mental environment as opposed to our physical environment,” read the curatorial statement. “Here is the simplest everyday example: When at a restaurant with friends, you reach out to your smartphone to do an online search for a reference to further the conversation, you use prosthetic intelligence.” As none of the works shown have actually been built, the pieces experimented with the possibilities for representation and fabrication that “prosthetic intelligence” allows. The selected submissions used a range of technologies and methods, including photography, digital collage, AI, digital modeling, and virtual reality. The abundance of accessible data and its role in shaping architecture and aesthetics was a pervasive theme among the show’s participants. Ceren Arslan’s Los Angeles, for instance, used photo collage and editing to compile internet-sourced images into an imaginary yet believable streetscape. Others speculated about data visualization at a moment when drawings are increasingly expected to be read not only by humans but by machines and AI, as in Brandon Wetzel’s deep data drawing.

“The work shown at the exhibition, rather than serving as a speculative criticism pointing toward a techno-fetishist paradigm, tries to act as a recording device capturing a moment in architectural discourse. Both the excitement and the skepticism around the presented methodologies are due to the fact that they are yet to come to fruition as built projects,” said the curators in a statement.


URBAN-X 6 showcases new tech solutions at A/D/O

This past Thursday, URBAN-X hosted its sixth demo day at A/D/O in Brooklyn, where startups showed what Micah Kotch, the startup accelerator’s managing director, called “novel solutions to urban life.” URBAN-X, which is organized by MINI, A/D/O’s founder, in partnership with the venture firm Urban Us, began incubating urban-focused startups back in 2016. Previous iterations have seen everything from electric vehicle companies to waste management startups, and for this session the brief was intentionally broad, said Kotch. On display was everything from machine-learning solutions for building energy management to apps that let people buy leftover prepared food from fast-casual restaurants and cafes to prevent food waste and generate some extra revenue. Pi-Lit showed off a networked approach to highway and infrastructure safety. Many lives are lost each year as people sit stranded after accidents, or as construction workers operate in dangerous work zones. The California-based company has developed mesh-networked lighting that can be deployed by first responders or added to existing work-zone infrastructure. In addition, the company has developed an array of sensors that can be affixed to bridges, roads, and temporary barriers—which founder Jim Selevan says are prone to impacts that transportation departments never learn about, leaving unknown structural compromises that can cause accidents later on. Sensors could also let relevant parties know if a bridge is vibrating too much, or when roads begin freezing and warnings need to be put out, providing users with “real-time ground truth.” 3AM also presented its plans for using mesh networks with a focus on safety: its program relies on drones and portable trackers to support operational awareness for firefighters.
More whimsically, Hubbster showcased its solution—already deployed in Paris and Copenhagen—to support urban play: essentially an app-based rental system for basketballs, croquet sets, and everything in between, dispensed from small, battery-powered smart lockboxes. Less glamorously but quite critically, Varuna is trying to change the aging U.S. water infrastructure system, which exposes as much as 63 percent of the country to unsafe water and still relies largely on manual testing, even for federally mandated across-the-board chlorine monitoring. The company hopes that by introducing AI-equipped sensors to utility systems, U.S. water can be delivered more safely, efficiently, and cheaply, addressing “operational inefficiencies in water supply, outdated tools, and a lack of visibility.” Also working with utilities was the Houston-based Evolve Energy, whose AI behavioral-classification solution, currently available in parts of Texas, allows electricity to be bought at wholesale prices at the times of day when it is cheapest, prioritizing the comfort and needs individual users value most. For example, a home can pre-cool with cheap electricity and then shut its system off when prices surge. Variable rates, à la airline tickets, were a common theme—for example, Food for All, an app designed to reduce food waste and create extra revenue for fast-casual restaurants, offers flexible pricing for customers to pick up food that might otherwise be tossed. Most relevant to architects, perhaps, were Cove.Tool’s recent updates. The startup reports big strides on its cloud-based app that helps architects create efficient buildings. Reportedly cutting energy grading down from tens of hours to mere minutes, the app can now simulate the effects of sunlight—through various types of glass—on utility usage, among many other new micro-climatic simulation features.
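Evolve Energy's pitch boils down to shifting flexible loads, like pre-cooling, toward the cheapest hours of the day. A toy sketch of that idea follows; the cheapest-quartile threshold rule is a guess at the general approach, not the company's actual behavioral-classification model.

```python
def precool_schedule(prices, cheap_percentile=0.25):
    """Return one boolean per hour: run cooling in hour i iff the
    wholesale price that hour falls in the cheapest fraction of the
    day. A simplistic illustration of price-responsive load shifting.
    """
    if not prices:
        return []
    # Threshold at the boundary of the cheapest `cheap_percentile` hours.
    cutoff = sorted(prices)[max(0, int(len(prices) * cheap_percentile) - 1)]
    return [p <= cutoff for p in prices]
```

With four hourly prices `[10, 20, 30, 40]` (cents/kWh, invented numbers), only the first hour qualifies, so the home would pre-cool then and coast through the expensive afternoon.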

ARTECHOUSE's Chelsea Market space will let visitors experience architectural hallucinations

ARTECHOUSE, a technology-focused art exhibition platform conceived in 2015 by Sandro Kereselidze and Tati Pastukhova, has been presenting digitally inspired art in Washington, D.C., and Miami. Now they’re coming to New York, “a clear next step for [their] mission,” with an inaugural exhibition by Refik Anadol. The Istanbul-born, Los Angeles-based Anadol is known for light and projection installations that often have an architectural component, such as the recent animation projected on the facade of the Frank Gehry-designed Walt Disney Concert Hall. For ARTECHOUSE in New York (also Anadol’s first large exhibition in the city), he’ll be presenting Machine Hallucination. The installation will create what he calls “architectural hallucinations” derived from millions of images processed by artificial intelligence and machine learning algorithms. “With Refik, it’s been a collaborative process for over a year and a half, bringing a new commission, Machine Hallucination, to life,” explained Kereselidze and Pastukhova. “We have worked closely with Refik to develop the concept for this exciting new work, thinking carefully about how to most effectively utilize and explore our Chelsea Market space.” ARTECHOUSE is especially suited to visualizing Refik’s “data universe,” with a floor-to-ceiling, room-wrapping 16K laser projector that the creators claim features “the largest seamless megapixel count in the world,” along with 32-channel sound from L-ISA. The more than 3 million photos, representing numerous architectural styles and movements, will be made to expose (or generate) latent connections between these representations of architectural history, generating “hallucinations” that challenge our notions of space and how we experience it—and providing insight into how machines might experience space themselves. It makes us consider what happens when architecture becomes information.
Of the work, Anadol said, “By employing machine intelligence to help narrate the hybrid relationship between architecture and our perception of time and space, Machine Hallucination offers the audience a glimpse into the future of architecture itself.” Machine Hallucination will inhabit the new 6,000-square-foot ARTECHOUSE space in Chelsea Market, located in an over-century-old former boiler room with exposed brick walls and a refurbished terracotta ceiling, which, according to its creators, “supplies each artist with a unique canvas and the ability to drive narratives connecting the old and new.” ARTECHOUSE will open to the public early next month.
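Work in this vein typically trains a generative model on the image corpus and then navigates its latent space: interpolating between two latent codes produces frames that morph between learned architectural forms. A minimal sketch of that traversal is below; the trained generator itself is assumed, not implemented, and this is a generic technique rather than Anadol's specific method.

```python
def latent_path(z_start, z_end, steps):
    """Linearly interpolate between two latent vectors. Feeding each
    intermediate vector to a trained generator would yield one frame
    of a morphing "hallucination" sequence; the generator is assumed.
    """
    path = []
    for i in range(steps):
        t = i / (steps - 1) if steps > 1 else 0.0
        path.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return path
```

Smoother paths (e.g., spherical interpolation) are common in practice, since straight lines through a latent space can pass through low-quality regions.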

How can new technologies make construction safer?

Construction remains one of the most dangerous careers in the United States. To stop accidents before they happen, construction companies are turning to emerging technologies to improve workplace safety—from virtual reality and drone photography to IoT-connected tools and machine learning. That said, some solutions come with the looming specter of workplace surveillance in the name of safety, with all of the Black Mirror-esque possibilities that entails. The Boston-based construction company Suffolk has turned to artificial intelligence to try to make construction safer. Suffolk has been collaborating with computer vision company Smartvid.io to create a digital watchdog of sorts that uses a deep-learning algorithm and workplace images to flag dangerous situations and workers engaging in hazardous behavior, like failing to wear safety equipment or working too close to machinery. Suffolk has even managed to get some of its smaller competitors to join it in data sharing, a mutually beneficial arrangement since machine learning systems require large amounts of example data, something that’s harder for smaller operations to gather. Suffolk hopes to use this decade’s worth of aggregated information, as well as scheduling data, reports, and info from IoT sensors, to create predictive algorithms that will help prevent injuries and accidents before they happen and increase productivity. Newer startups are also entering the AEC AI fray, including three supported by URBAN-X. The bi-coastal Versatile Natures bills itself as the "world's first onsite data-provider," aiming to transform construction sites with sensors that allow managers to make decisions proactively. Buildstream is working to make equipment and construction machinery communicative, and Contextere, focusing on people instead, claims that its use of the IoT will connect different members of the workforce.
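Systems like the one Smartvid.io built for Suffolk generally layer simple safety rules on top of a trained vision model's detections. The rule layer can be sketched as below; the label names and detection format are hypothetical, and the detector itself (the hard part) is assumed.

```python
def flag_image(detections):
    """Return safety flags for one site photo. `detections` is a list
    of dicts a hypothetical detector might emit, e.g.
    {"label": "worker", "wearing_hard_hat": False, "near_machinery": True}.
    """
    flags = []
    for d in detections:
        if d.get("label") != "worker":
            continue  # rules here only concern detected workers
        if not d.get("wearing_hard_hat", True):
            flags.append("missing hard hat")
        if d.get("near_machinery", False):
            flags.append("too close to machinery")
    return flags
```

Flagged photos would then be routed to a safety manager for review rather than acted on automatically, which is how such watchdog systems are typically deployed.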
At the Florida-based firm Haskell, instead of just using surveillance on the job site, the company is addressing the problem before construction workers even get into the field. While videos and quizzes are one way to train employees, Haskell saw the potential for interactive technologies, namely virtual reality, to boost employee training in a safe context. In its search for VR systems that might suit its needs, Haskell discovered that no extant solutions were well suited to the particulars of construction. Along with its venture capital spinoff, Dysruptek, the firm partnered with software engineering and game design students at Kennesaw State University in Georgia to develop the Hazard Elimination/Risk Oversight program, or HERO, relying on software like Revit and Unity. The video game-like program places users in a job site, derived from images taken by drone and 360-degree cameras at a Florida wastewater treatment plant that Haskell built, and evaluates a trainee’s performance and ability to follow safety protocols in an ever-changing environment. At Skanska USA, where 360-degree photography, laser scanning, drones, and even virtual reality are becoming increasingly commonplace, employees are realizing the potential of these new technologies not just for improved efficiency and accuracy in design and construction, but for overall job site safety. Albert Zulps, Skanska’s regional director of virtual design and construction, says that the tech goes beyond BIM and design uses, and actively helps avoid accidents. “Having models and being able to plan virtually and communicate is really important,” Zulps explained, noting that BIM and models are now pretty much universally trusted in AEC industries, and the increased accuracy of capture technologies is making them more accurate still—tying them not just to predictions, but to the realities of the site. “For safety, you can use those models to really clearly plan your daily tasks.
You build virtually before you actually build, and then foresee some of the things you might not have if you didn’t have that luxury.” Like Suffolk, Skanska has partnered with Smartvid.io to help process its data. As technology continues to evolve, the ever-growing construction industry will hopefully become not just more cost-efficient, but safer overall.

Open Call: R+D for the Built Environment Design Fellowship

R+D for the Built Environment is sponsoring a six-month, paid, off-site design fellowship program starting this summer. We're looking for four candidates working in key R+D topic areas:
  1. Building material science
  2. 3D printing, robotics, AR/VR
  3. AI, machine learning, analytics, building intelligence
  4. Quality housing at a lower cost
  5. Building resiliency and sustainability
  6. Workplace optimization
  7. Adaptable environments
We're excited to support up-and-coming designers, engineers, and researchers (and all the disciplines in between!) as they advance their work, and to provide them with a platform to share their ideas. Follow the link below for more details and instructions on how to apply. Applications are due by May 31, 2019. https://sites.google.com/view/rdbe-design-fellowship-2019/home

A French startup is using drones and AI to save the world's architectural heritage

Now active in over 30 countries, French startup Iconem is working to preserve global architectural and urban heritage one photograph at a time. Leveraging complex modeling algorithms, drone technology, cloud computing, and, increasingly, artificial intelligence (AI), the firm has documented major sites like Palmyra and Leptis Magna, producing digital versions of at-risk sites at unprecedented resolutions and sharing its many-terabyte models with researchers and the public in the form of exhibitions, augmented reality experiences, and 1:1 projection installations across the globe. AN spoke with founder and CEO Yves Ubelmann, a trained architect, and CFO Etienne Tellier, who also works closely on exhibition development, about Iconem’s work, technology, and plans for the future.

The Architect's Newspaper: Tell me a bit about how Iconem got started and what you do.

Yves Ubelmann: I founded Iconem six years ago. At the time I was an architect working in Afghanistan, in Pakistan, in Iran, in Syria. In the field, I was seeing the disappearance of archaeological sites, and I was concerned by that. I wanted to find a new way to record these sites and to preserve them even if the sites themselves might disappear in the future. The idea behind Iconem was to use new technology like drones and artificial intelligence, as well as more standard digital photography, to create a digital copy or model of each site along with partner researchers in these different countries.

AN: You mentioned drones and AI; what technology are you using?

YU: We have a partnership with a lab in France, INRIA (Institut National de Recherche en Informatique/National Institute for Research in Computer Science and Automation). They developed an algorithm that can transform 2D pictures into a 3D point cloud, which is a projection of every pixel of the picture into space.
These points in the point cloud in turn reproduce the shape and the color of the environment, the buildings, and so on. It takes billions of points to reproduce the complexity of a place in a photorealistic manner, but because the points are so tiny and so numerous, you cannot see the individual points—you see only the shape of the building in 3D.

Etienne Tellier: The generic term for the technology that converts these big datasets of pictures into 3D models is photogrammetry.

YU: Which is just one process. Photogrammetry was invented more than 100 years ago…Before, it was a manual process, and we were only able to reproduce just a part of a wall or something like that. Big data processing has made it possible to reproduce a huge part of the real environment. It’s a very new way of doing things. Just in the last two years, we’ve become able to make a copy of an entire city—like Mosul or Aleppo—something not possible before. We also have a platform to manage this huge amount of data, and we’re working with cloud computing. In the future we want to open this platform to the public.

AN: All of this technology has already grown so quickly. What do you see coming next?

YU: Drone technology is becoming more and more efficient. Drones will go farther and farther, because batteries last longer, so we can imagine documenting sites that are not accessible to us because they’re in a rebel zone, for example. Cameras also continue to get better and better. Today we can produce a model with one point per millimeter, and I think in the future we will be able to have ten points per millimeter. That will enable us to see every detail of something like small writing on a stone.

ET: Another possible evolution, and we are already beginning to see this happen thanks to artificial intelligence, is automatic recognition of what is shown by a 3D model. That’s something you can already have with 2D pictures.
There are algorithms that can analyze a 2D picture and say, “Oh okay, this is a cat. This is a car.” Soon there will probably be the same thing for 3D models, where algorithms will be able to detect the architectural components and features of your 3D model and say, “Okay, this is a Corinthian column. This dates back to the second century BC.” One of the technologies we are working on is the ability to create beautiful images from 3D models. We’ve had difficulties to overcome because our 3D models are huge. As Yves said, they are composed of billions of points, and for the moment there is no 3D software on the market that makes it possible to easily manipulate a very big 3D model in order to create computer-generated videos. So we created our own tool, where we don’t have to lower the quality of our 3D models. We can keep the native resolution and photorealism of our big 3D models and create very beautiful videos from them that can be as large as 32K and projected onto very big areas. There will be big developments in this field in the future.

AN: Speaking of projections, what are your approaches to making your research accessible? Once you’ve preserved a site, how does it become something that people can experience, whether they’re specialists or the public?

YU: There are two ways to open this data to the public. The first is producing digital exhibitions, which we are currently doing for many institutions all over the world. The other is to give access directly to the raw data, from which you can take measurements or investigate a detail of architecture. This platform is open to specialists, to the scientific community, to academics. The first exhibition we did was with the Louvre in Paris at the Grand Palais, for an exhibition called Sites Éternels [Eternal Sites], where we projection-mapped a huge box, 600 square meters [6,458 square feet], with 3D video.
We were able to project monuments like the Damascus Mosque or the Palmyra sites, and the visitors are surrounded by them at a huge scale. The idea is to reproduce landscapes and monuments at a scale of one to one, so the visitor feels like they’re inside the sites.

AN: So you could project one to one?

ET: Yes, we can project one to one. For example, in the exhibition we participated in recently, at L'Institut du monde arabe in Paris, we presented four sites: Palmyra, Aleppo, Mosul, and Leptis Magna in Libya. And often the visitor could see the sites at a one-to-one scale. Leptis Magna was quite spectacular because people could see the columns at their exact size. It really increased the impact and emotional effect of the exhibition. All of this is very interesting from a cultural standpoint because you can create immersive experiences where the viewer can travel through a whole city. And they can discover not only the city as a whole but also the monuments and the architectural details. They can switch between different scales—the macro scale of a city, the more micro one of a monument, and the very micro one of a detail—seamlessly.

AN: What are you working on now?

ET: Recently, we participated in an exhibition financed by Microsoft and held in Paris at the Musée des Plans-Reliefs, a museum that has replicas of the most important sites in France. They’re 3D architectural replicas, or maquettes, that can be 3 meters [approx. 10 feet] wide; they were commissioned by Louis XIV and created during the 17th century because he wanted replicas to prepare a defense in case of an invasion. Microsoft wanted to create an exhibition using augmented reality, and they proposed making an experience in this museum focusing on the replica of Mont-Saint-Michel, the famous site in France.
We 3D scanned this replica of Mont-Saint-Michel, and also 3D scanned the actual Mont-Saint-Michel, to create an augmented reality experience in partnership with another French startup. We made very precise 3D models of both sites—the replica and the real site—and used them to create holograms that were superimposed on the physical model. Through headsets, visitors would see a hologram of water rising and surrounding the replica of Mont-Saint-Michel. You could see the digital and the physical, and the interplay between the two. And you could also see the site as it was hundreds of years before. It was a whole new experience relying on augmented reality, and we were really happy to take part in it. This exhibition should travel to Seattle soon.
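Ubelmann's description of "a projection of every pixel of the picture into space" corresponds to the standard pinhole back-projection at the heart of photogrammetry pipelines. A sketch follows, with the depth per pixel given directly; in a real pipeline, depth is estimated by matching features across many overlapping photos, and the intrinsic parameters come from camera calibration.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with known depth (meters) to a 3D point in
    the camera frame, using the pinhole camera model. fx and fy are
    focal lengths in pixels; (cx, cy) is the principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel of every calibrated photo, then merging the results in a common world frame, is what yields the billions-of-points clouds described in the interview.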

Amazon is bringing its seamless automated grocery store to New York

Imagine a world where artificial intelligence tracks your every movement. A world where buildings have minds of their own, learning your behaviors and collecting data from you as you come and go. While existing technology has not yet reached sci-fi levels, a visit to an Amazon Go grocery store offers a peek into this possible future of retail design. This week Amazon announced plans to open a new store in New York, the first of its kind on the East Coast, before opening nearly 3,000 more nationwide by 2021. The company has already built out six Amazon Go stores in Seattle, Chicago, and San Francisco. The cutting-edge stores, as seen in the first locations, are characterized by visual simplicity, clarity, and hyper-functionality. Through structural elements including minimalistic facades, geometric configurations, and exposed raw materials such as wood veneer and polished concrete, the interiors assume an industrial feel. Muted colors and black merchandise racks give the stores a clean appearance as well. Meanwhile, ceiling cameras monitor shoppers as they wander the aisles. The stores are unique in that they are devoid of cashiers, cash registers, and self-service checkout stands. Customers only need to walk in, take what they need, and leave. As they swing through the turnstiles on their way out, Amazon automatically bills their credit cards. Within minutes, a receipt is sent to the Amazon app, giving customers a summary of what they bought, what they paid, and the exact amount of time they spent in the store. The stores, which depend on highly sophisticated image recognition software and artificial intelligence to function, are expected to drastically transform the retail experience. Amazon began working on retail stores five years ago with the goal of eliminating common consumer complaints, such as struggling to find products and waiting in long lines.
Since the first Amazon Go store opened last January in Seattle, it has seen tremendous praise and success. According to CNN, highly automated retail stores like Amazon Go are expected to become the norm within as little as 10 to 15 years. Research has shown that up to 7.5 million retail jobs are at risk of automation in the next decade, which will save retailers money on labor and boost profits, but will obviously cost retail workers their livelihoods. Automated stores can streamline ordering and restocking as cameras and AI track inventory in real time. The removal of cash registers frees more space for inventory. Customer data can also be uploaded to each store’s servers, where retailers can present shoppers with personalized discounts, offers, and other incentives. While Amazon has confirmed plans to open an Amazon Go store in New York, its location has yet to be determined.
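The "just walk out" flow described above reduces to event bookkeeping: cameras attribute take-and-return events to each shopper, and exiting the turnstile triggers billing. A toy model of that bookkeeping is below; the event format and prices are invented for illustration and say nothing about Amazon's actual internals, where attributing events to the right shopper is the genuinely hard computer vision problem.

```python
def receipt(events, prices):
    """Compute an Amazon Go-style receipt from a shopper's event
    stream. `events` is a list of ("take" | "return", item) tuples;
    `prices` maps item names to cents. Returns (items, total_cents).
    """
    cart = {}
    for action, item in events:
        if action == "take":
            cart[item] = cart.get(item, 0) + 1
        elif action == "return" and cart.get(item, 0) > 0:
            # Putting an item back on the shelf removes it from the cart.
            cart[item] -= 1
    items = {k: v for k, v in cart.items() if v > 0}
    total = sum(prices[k] * v for k, v in items.items())
    return items, total
```

A shopper who takes two sodas and returns one is billed for exactly one, which is the behavior the in-store cameras are there to guarantee.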

MIT announces $1 billion campus focused on AI advancement

The encroachment of self-driving cars, acrobatic terminators, and decades of media hysterics over the destructive potential of artificial intelligence (AI) have brought questions of robot ethics into the public consciousness. Now MIT has leaped into the fray and will tackle those issues head-on with the announcement of a new school devoted solely to the study of the opportunities and challenges that the advancement of AI will bring. The new MIT Stephen A. Schwarzman College of Computing, named after the Blackstone CEO who gave a $350 million foundational grant to launch the endeavor, will get its own new headquarters building on the MIT campus. While Schwarzman’s gift is large, the final cost of establishing the new school has been estimated at a whopping $1 billion; MIT has reportedly already raised another $300 million for the initiative and is actively fundraising to close the gap. “As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” wrote MIT president L. Rafael Reif in the announcement. “In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work.” As Reif told the New York Times, the goal is to “un-silo” previously self-contained academic disciplines and create a center where biologists, physicists, historians, and scholars in any other discipline can research the integration of AI and data science into their fields. Rather than offering a standard double major, the new school will instead integrate computer science into the core of every course offered there.
The college will also host forums and advance policy recommendations on the developing field of AI ethics. The Stephen A. Schwarzman College of Computing is set to open in September 2019, and the new building is expected to be complete in 2022. No architect has been announced yet; AN will update this article when more information is available.

New startup is building a stock exchange for high-value art

For centuries, art collecting and art brokering were comfortably ensconced in the exclusive domain of the super-rich. Now, things could be changing. A start-up called Masterworks is trying to build a stock exchange for high-value art. This summer, Masterworks acquired Claude Monet’s Coup de Vent at auction for $6.3 million and plans to launch a public offering with individual shares valued at $20. Unfortunately, your fractional ownership won’t allow you to take the painting home and hang it over your mantelpiece, but it will allow you to participate in a highly lucrative and historically tested investment vehicle that has never previously been available to the masses. According to Masterworks’ website, masterpiece paintings have outpaced growth in the leading U.S. stock exchange index by nearly 300 percent in the last 20 years. Current sales of expensive art are limited by liquidity and transaction protocol. There simply aren’t that many people willing to fork out a hundred million dollars for a Picasso on any given day. By creating an affordable entry point, Masterworks also hopes to help art reach a wider audience while giving the general public more agency in selecting and determining the true value of cultural artifacts. Masterworks’ art transactions will be recorded on a blockchain platform that creates transparency and immutability. Ownership can be clearly tracked using smart contracts and digital currencies, allowing for instant transfers across geographic borders without the involvement of banks and auction houses that traditionally charge high fees. New blockchain-based art platforms like KODAKone, for example, allow artists to register their artwork digitally before entering the art market, providing security to all present and future stakeholders. Ironically, your digital kitten art may one day be more easily authenticated than Leonardo da Vinci’s Salvator Mundi, which recently sold at auction for $450 million.
Plus, telling your friends down at the pub that you own a Picasso could be, well…priceless.

MIT approves new degree combining urban planning and computer science

If you think that urban planning and computer science go hand in hand, MIT’s new degree may be just the subject for you. At its May 16 meeting, the MIT faculty approved a bachelor of science in urban science and planning with computer science, which will be available to all undergraduates starting in the fall 2018 semester. The new major is offered jointly by the Department of Urban Studies and Planning and the Department of Electrical Engineering and Computer Science. According to a press release, it will combine “urban planning and public policy, design and visualization, data analysis, machine learning, and artificial intelligence, pervasive sensor technology, robotics and other aspects of both computer science and city planning.” The curriculum will also cover ethics and geospatial analysis.

“The new joint major will provide important and unique opportunities for MIT students to engage deeply in developing the knowledge, skills, and attitudes to be more effective scientists, planners, and policy makers,” says Eran Ben-Joseph, head of the Department of Urban Studies and Planning. “It will incorporate STEM education and research with a humanistic attitude, societal impact, social innovation, and policy change — a novel model for decision making to enable systemic positive change and create a better world. This is really unexplored, fertile new ground for research, education, and practice.”

Students will spend time in the urban science synthesis lab, a required component of the degree in which advanced technological tools are an integral part of the coursework.

URBAN-X accelerator wants to transform cities, one semester at a time

Meet the incubators and accelerators producing the new guard of design and architecture start-ups. This is part of a series profiling incubators and accelerators from our April 2018 Technology issue.

The age of the car as we know it appears to be winding down—that is, if the diverse initiatives started by car companies are any indication. For example, in Greenpoint, Brooklyn, the BMW-owned MINI recently launched A/D/O, an nARCHITECTS-designed makerspace and the headquarters of URBAN-X, an accelerator for start-ups seeking to improve urban life. Although URBAN-X is only two years old, the company has hit the ground running thanks to MINI’s partnership with Urban Us, a network of investors focused on funding start-ups that use technology to improve urban living. Through that partnership, URBAN-X is able to use its funding from MINI to take on companies that lack finished products or established customers and then connect them to the Urban Us community.

Through a rigorously programmed five-month semester, up to ten start-ups at a time work with in-house engineering, software, marketing, and urbanism experts and are given access to the outside funding and political connections that URBAN-X is able to leverage. Competition to join the cohort is fierce, especially since the chosen companies are given $100,000 in initial funding. Architects, planners, urban designers, construction workers, and others with a background in thinking about cities have historically applied.

At the time of writing, the third group had just finished its tenure and presented an overview of its work at A/D/O at a Demo Day on February 9. The companies have since followed up with whirlwind tours to court investors and realize their ideas. The diversity of projects that have come out of URBAN-X represents the wide-ranging problems that face any modern city. The solutions aren’t entirely infrastructure-based, either.
For example, Farmshelf has gained critical acclaim by moving urban farming into sleek, indoor “growing cabinets”; Industrial/Organic is turning decomposing food waste into electricity; and Good Goods has created a platform for smaller retailers to occupy space in large vacancies by pooling money. Ultimately, as cities evolve and become more interconnected, addressing the problems found within them will require ever more complicated and multidisciplinary solutions. The fourth URBAN-X cohort will be announced on May 10, 2018.

Notable alumni include:

Numina: A start-up that uses sensor-integrated streetlights to map traffic patterns.

Lunewave: A technology company that claims its spherical sensor for self-driving cars is cheaper and more effective than the LiDAR (light detection and ranging) currently in widespread use (likely a win for MINI and BMW).

Sencity: A platform that encourages human engagement in smart cities.

RoadBotics: A tool that uses smartphone monitoring to improve road maintenance.

Qucit: Software that aggregates urban planning data and uses AI to optimize everything from emergency response times to park planning.