If you think urban planning and computer science go hand in hand, MIT’s new degree may be the subject for you. At its May 16 meeting, the MIT faculty approved a bachelor of science in urban science and planning with computer science, which will be available to all undergraduates starting in the fall 2018 semester. The new major is offered jointly by the Department of Urban Studies and Planning and the Department of Electrical Engineering and Computer Science. According to a press release, it will combine “urban planning and public policy, design and visualization, data analysis, machine learning, and artificial intelligence, pervasive sensor technology, robotics and other aspects of both computer science and city planning.” Other interdisciplinary components include ethics and geospatial analysis. “The new joint major will provide important and unique opportunities for MIT students to engage deeply in developing the knowledge, skills, and attitudes to be more effective scientists, planners, and policy makers,” says Eran Ben-Joseph, head of the Department of Urban Studies and Planning. “It will incorporate STEM education and research with a humanistic attitude, societal impact, social innovation, and policy change — a novel model for decision making to enable systemic positive change and create a better world. This is really unexplored, fertile new ground for research, education, and practice.” Students will also complete a required urban science synthesis lab, where advanced technological tools will be an integral part of the coursework.
Meet the incubators and accelerators producing the new guard of design and architecture start-ups. This is part of a series profiling incubators and accelerators from our April 2018 Technology issue. The age of the car as we know it appears to be winding down—that is, if the diverse initiatives started by car companies are any indication. For example, in Greenpoint, Brooklyn, the BMW-owned MINI recently launched A/D/O, a nARCHITECTS-designed makerspace and the headquarters of URBAN-X, an accelerator for start-ups seeking to improve urban life. Although URBAN-X is only two years old, the company has hit the ground running thanks to MINI’s partnership with Urban Us, a network of investors focused on funding start-ups that use technology to improve urban living. Through that partnership, URBAN-X is able to use its funding from MINI to take on companies that lack finished products or established customers and then connect them to the Urban Us community. Through a rigorously programmed five-month semester, up to ten start-ups at a time work with in-house engineering, software, marketing, and urbanism experts and are given access to the outside funding and political connections that URBAN-X is able to leverage. Competition to join the cohort is fierce, especially since the chosen companies are given $100,000 in initial funding. Architects, planners, urban designers, construction workers, and others with a background in thinking about cities have historically applied. At the time of writing, the third group had just finished its tenure and presented an overview of its work at a Demo Day at A/D/O on February 9. The companies have since followed up with whirlwind tours to court investors and realize their ideas. The diversity of projects that have come out of URBAN-X reflects the wide-ranging problems that face any modern city. The solutions aren’t entirely infrastructure-based, either.
For example, Farmshelf has gained critical acclaim by moving urban farming into sleek, indoor “growing cabinets”; Industrial/Organic is turning decomposing food waste into electricity; and Good Goods has created a platform for smaller retailers to occupy space in large vacancies by pooling money. Ultimately, as cities evolve and become more interconnected, addressing the problems found within them will require ever more complicated and multidisciplinary solutions. The fourth URBAN-X cohort will be announced on May 10, 2018. Notable alumni include:

Numina: A start-up that uses sensor-integrated streetlights to map traffic patterns.
Lunewave: A technology company that claims its spherical sensor for self-driving cars is cheaper and more effective than the LiDAR (light detection and ranging) sensors currently in widespread use (likely a win for MINI and BMW).
Sencity: A platform that encourages human engagement in smart cities.
RoadBotics: A tool that uses smartphone monitoring to improve road maintenance.
Qucit: Software that aggregates urban planning data and uses AI to optimize everything from emergency response times to park planning.
This is the fourth column of “Practice Values,” a bi-monthly series by architect and technologist Phil Bernstein. The column focuses on the evolving role of the architect at the intersection of design and construction, including subjects such as alternative delivery systems and value generation. Bernstein was formerly vice president at Autodesk and now teaches at the Yale School of Architecture. In my last column I explored the potential impacts of next-generation technology—particularly machine intelligence (also known as artificial intelligence or AI) and crowd-sourced knowledge—on the hegemony of professionalism for architects. This question was recently explored further by Daniel Susskind, one of the authors of an Oxford study, in a RIBA journal article entitled “The Way We’ll Work Tomorrow,” which suggested that modern knowledge work, like much of that performed by architects today, should be considered not so much “by job” as “by task,” and that many of those tasks are likely to be automated in the future. Professions exist to systematize expertise and, by extension, control access to it. Computation democratizes access to that expertise by digitizing and distributing it, but does this lead to an inevitable decline in the need for professionals themselves? Like manufacturing workers in the 20th century, knowledge workers are likely to be “de-skilled” in the 21st, as routine, transactional, and analytical tasks are performed by machine-learning algorithms referencing big data sources, and the need for human abilities for those same chores is eliminated. Just as CAD rendered my once-fearsome hand-drafting skills mostly irrelevant, expert systems may do the same with today’s expertise in, say, cost estimating or construction documentation.
Even though architectural design writ large is a profoundly creative act, the more prosaic components—preparing schedules, measuring and calculating, even evaluating performance characteristics like safety or zoning conformance—comprise a sizable portion of the architect’s fee. Production tasks connected to technical documentation alone (think CD-phase work) can be as much as 40 percent of compensation on a project. Once this stuff gets automated, will there be much less work, and will we need far fewer architects? Perhaps—unless we find alternate strategies for demonstrating the value of our efforts. Oxford’s Susskind suggests that while the “job of an architect” may be profoundly transformed by technology, the profession should reconsider some of our critical tasks in response. If design processes will inevitably be augmented by computation, we might control our destiny by taking on the problem of creating the resulting computational platforms: engineering knowledge systems and structures, developing workflow protocols for analysis and evaluation, and designing new systems from which design itself can spring. In some sense, this is meta-design—not unlike the work we’ve seen since the advent of BIM that required technology-implementation plans, data standards, and integrated multidisciplinary information flows. Cutting-edge design firms rely heavily on scripts and so-called “generative design” techniques, and what Susskind recommends here is a logical extension of that strategy, one that augments (rather than replaces) the capabilities of designers. Of course, the same technologies that might appear to be threats to our autonomy as architects could, jujitsu-style, be turned into opportunities. Susskind suggests that automation offers the immediate benefit of making routine activities more efficient; perhaps repurposing those newly found hours means more time to improve design.
He further recommends that our knowledge and influence could be magnified via consortia of digitally connected professionals, what he calls “communities of expertise,” where the sum is far greater than the individual parts. Author and Harvard architecture professor Peter Rowe once described the design process as dependent upon heuristic reasoning, since all design challenges are complex and somewhat open-ended with ambiguous definitions and indeterminate endpoints, borrowing from sociologist Horst Rittel, who characterized these as “wicked problems.” Computers themselves aren’t, at least today, particularly good at heuristics or solving wicked problems, but they are increasingly capable of attacking the “tame” ones, especially those that require the management of complex, interconnected quantitative variables like sustainable performance, construction logistics, and cost estimation. And since clients have a strong interest in seeing those things done well, why not lean into the chance to complement heuristics with some help on the tame problems, and capture the resulting value? That architects are so well-suited to the challenges of the wicked problem bodes well for us in the so-called "Second Machine Age," when machines don’t just do things we program them to do, but can learn how to do new things themselves.
The essential value of architects as professionals who can understand and evaluate a problem and synthesize unique and insightful solutions will likely remain unchallenged by our computer counterparts in the near future, an argument supported by a 2013 study of job computerization (again, at Oxford) that suggested that “occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.” Rather than rely upon this vaguely comforting conclusion, our profession must embrace and attack the wicked problem of the future of architecture and computational design and control the future of our profession accordingly. We’ll face far more opportunities than threats from computation if we can.
This fake town built by the University of Michigan will become a testing ground for developing smarter driverless cars
Researchers at the University of Michigan just one-upped a recent virtual SimCity project for testing smart technologies of future cities. A tangible, 32-acre testing ground for driverless cars called MCity pits autonomous vehicles against every conceivable real-life obstacle, minus the caprice of human drivers. The uninhabited town in the university's North Campus Research Complex contains suburban and city roadways, building facades, sidewalks, bike lanes, and streetlights. Recreating street conditions in a controlled environment means teaching robotic vehicles to interpret graffiti-defaced road signs, faded lane markings, construction obstacles, and other quotidian surprises that AI is still ill-equipped to handle. By dint of moveable facades, researchers can create any condition—from blind corners to odd intersections—to develop more conscientious self-driving vehicles. Vehicles will navigate city terrain from dirt to paving brick and gravel roads, decode freeway signs, and make split-second braking and lane-change decisions in a High-Occupancy Vehicle (HOV) lane at peak hours. "We believe that this transformation to connected and automated mobility will be a game changer for safety, for efficiency, for energy, and for accessibility," said Peter Sweatman, director of the U-M Mobility Transformation Center. "Our cities will be much better to live in, our suburbs will be much better to live in. These technologies truly open the door to 21st century mobility." MCity is the first major project of the University of Michigan Mobility Transformation Center, a partnership spanning government, academia, and industry. The initiative is backed by million-dollar investments from companies like Toyota, Nissan, Ford, GM, Honda, State Farm, Verizon, and Xerox, which will no doubt be affected should driverless cars go mainstream.
The testing center is also tinkering with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity to investigate whether it helps individual vehicles make better decisions. The university aims to eventually deploy 9,000 connected vehicles across the greater Ann Arbor area.
Real-life SimCity in New Mexico to become testing ground for new technologies that will power smart cities
A simulation video game can become a powerful innovation lab for new urban technologies, where researchers can test-drive every outlandish “what-if?” in a controlled environment. The Center for Innovation, Testing and Evaluation (CITE) is launching a full-scale SimCity—a small, fully functioning ghost town equipped with the technology touted by futurists as the next generation of smart cities. Modeled on a modest American town with a population of 35,000 and spread over 15 square miles, the mock metropolis is sited on a desolate stretch of land in southern New Mexico. Set to be wired with mock-up utilities and telecommunications systems as realistically as possible, the quintessentially average town will even have a gas station, big box store, and a simulated interstate highway alongside its tall office buildings, parks, houses, and churches. The town will also be sectioned into urban, rural, and suburban zones. Able to simulate anything from nuclear war to natural disasters to a stock market crash, or a triple whammy of all three, the ho-hum hypothetical town will also play host to driverless cars, packages delivered by drones, alternative energy power generation, and never-before-tested public monitoring, security, and computer systems. The goal of CITE is to provide the opportunity to test large-scale technology experiments in real-world conditions “without anyone getting hurt,” said Bob Brumley, managing director of Pegasus Global Holdings, the Washington state-based technology development firm behind the concept. Brumley estimates that support infrastructure, including electric plants and telecommunications, will take 24 months to create, while the city will be fully built between 2018 and 2020. The uninhabited city affords possibilities to test otherwise non-starter ideas hampered by safety and feasibility concerns in the real world, where human beings are the most fickle of variables. “It will be a true laboratory without complication and safety issues associated with residents.
Here you can break things and run into things and get used to how they work before taking them out into the market,” Brumley told Wired. One of numerous experiments he envisions involves deploying a fleet of driverless freight trucks controlled by a centralized wireless network. Testing on a real freeway, on the other hand, would be too hazardous. Other ideas range from simple practicalities—having small drones drop off packages on doorsteps—to cataclysm readiness—simulating a large-scale, real-time attack on energy, telecommunications, and traffic systems, or the effect of a “massive electromagnetic pulse attack on all the integrated circuits in our economy.” Brumley estimates an initial direct investment of $550–600 million, with an estimated total cost of $1 billion over the next five years as the city grows in size and complexity. We can only hope that their servers don’t crash.