At the Valerie C. Woodard Center, a community resource center in Charlotte, North Carolina, a new pavilion seems to rise right out of the earth. Called Pillars of Dreams, the continuous 26-foot-tall, cloud-like structure is the creation of MARC FORNES / THEVERYMANY, a studio known for its complex, computationally designed structures made of interlocking linear panels, or "stripes." In Pillars of Dreams, as in the firm's other projects, these stripes function not just as exterior and interior walls, but as the structure itself. “The skin of the project is everything—it’s your envelope, your experience, and foremost your structure,” explained Marc Fornes. “All projects we do are creating structure through geometry—self-supported structures.” Pillars of Dreams is constructed from ultra-thin aluminum sheets, laser-cut into “labyrinthine” bands of 3-millimeter, two-layer stripes. “It’s actually a giant 3D puzzle," Fornes said.

The design process for Pillars of Dreams represents a continuation and an evolution of more than 15 years of practice, originally inspired as a reaction to the triangle-driven geometries used in the 1990s and 2000s to develop complex architectural forms. That approach, which mirrored the polygon meshes of some digital models, resulted in a huge number of panels that took a great deal of time to assemble. The goal of working with stripes is to speed up construction and reduce the number of elements involved.

Pillars of Dreams was created to inspire visitors to “carry on a sort of dreaming, escapism.” It operates at a variety of scales—appearing one way to someone driving by and another as it’s approached, then surprising visitors by allowing them to enter its interior, where the colored gradient created by different shades and shapes inside and out evolves as one gets closer. It is, according to Fornes, “a universe curved in all directions.”
What if we could “breed” buildings to be more efficient? That’s the provocation from artist, designer, and programmer Joel Simon, who was inspired by the potential of 3D printing and other emerging digital manufacturing technologies, as well as his background in computer science and biology, to test a system of automated planning. With a series of algorithms of two types—“graph-contraction and ant-colony pathing”—Simon is able to “evolve” optimized floor plans based on different constraints, using a genetic method derived from existing neural network techniques. The results are, according to a white paper he published, “biological in appearance, intriguing in character, and wildly irrational in practice.”

The example he gives is based on an elementary school in Maine. Most schools are laid out as long corridors with classrooms coming off the sides, a highly linear design. By setting different parameters, like minimizing traffic flow and material usage, or making the building easier to exit in an emergency, the algorithms output different floor plans, developed on a genetic logic. But this optimization is done “without regard for convention [or] constructability,” and adding other criteria, like maximizing windows for classrooms, led to complicated designs with numerous interior courtyards. For projects like schools, he suggests, class schedules and school layouts could be evolved side by side, creating a building optimized around traffic flow.

While perhaps currently impractical (there’s no getting rid of architects—or rectangles—yet!), Simon hopes the project will push people to think about how building with emerging technologies—like on-site 3D printing, CNC machining, self-assembling structures, and robotic construction—can be integrated into the design process. These technologies hold promise for new forms that are hard to design for, he believes, and potentials that can’t be realized through existing design methods.
As he told Dezeen: "Most current tools and thinking are stuck in a very two-dimensional world…[but,] designing arbitrary 3D forms optimized for multiple objectives—material usage, energy efficiency, acoustics—is simply past human cognitive ability."
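Simon's actual pipeline is far more elaborate, but the genetic core he describes can be sketched in a few lines: score candidate layouts against constraints, keep the fittest, and mutate the survivors. Everything in this sketch is an illustrative assumption, not Simon's model: the room count, the grid, the traffic pairs, and the fitness weights are all invented for demonstration.

```python
import random

ROOMS = 8    # hypothetical number of classrooms
GRID = 10    # candidate layouts live on a GRID x GRID cell grid
# Assumed circulation: each room trades traffic with the next one.
TRAFFIC = [(i, (i + 1) % ROOMS) for i in range(ROOMS)]

def fitness(genome):
    """Lower is better: total walking distance between connected rooms,
    plus a heavy penalty for two rooms landing on the same cell."""
    dist = sum(abs(genome[a][0] - genome[b][0]) + abs(genome[a][1] - genome[b][1])
               for a, b in TRAFFIC)
    overlaps = ROOMS - len(set(genome))
    return dist + 100 * overlaps

def random_genome(rng):
    return [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(ROOMS)]

def mutate(genome, rng):
    child = list(genome)
    child[rng.randrange(ROOMS)] = (rng.randrange(GRID), rng.randrange(GRID))
    return child

def evolve(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]   # truncation selection keeps the fittest
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
```

Swapping in a different fitness function (material usage, egress distance, window count) is what produces the radically different plans Simon reports; the loop itself stays the same.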
Anyone who’s played The Sims (especially with cheat codes) knows the fun and ease of designing your own home with a few clicks of the mouse. Anyone who's designed an actual, IRL home knows that the real process is completely different. Homebuyers who want a custom home often encounter a frustratingly opaque and expensive process, or are stuck with pre-made plans that look like everyone else’s. They’re left, as Michael Bergin, cofounder and director of architecture at the startup Higharc, put it, with “houses that are just left without design.” Even getting an architect to customize stock home plans, like those available online, can wind up costing in the low five figures, Bergin said, so most go with pre-designed plans instead. “People spend their entire savings, everything that they have, on something that's not fit for them." Higharc believes there could be a “middle ground” in home architecture. To that end, it's developed a web-based home design app aimed at the everyday user and homebuyer. “We are trying to…address fundamental inefficiencies, structural challenges in the home building,” said Bergin. “The product that we are developing isn't going to replace an experienced 20-year architect,” he admitted, but it will, Higharc hopes, make customization much more accessible to a wider swath of new home buyers. Higharc is trying to embed “architectural intelligence” directly into its web-based software. The app uses, among other technologies, “procedural generation,” a computational technique borrowed from video games (one of Higharc’s founding members, Thomas Holt, has game industry experience) that generates graphics on the fly. “The difference between where this lands in gaming and our approach is that we're building in these heuristic or structural rules, so that no house that's produced in our system is structurally deficient,” explained Bergin.
“[Higharc] looks at the international building code and prescriptive span tables and ensures that every house that we are producing is something that's buildable.” (A recent Curbed article reported that much of this code data comes from the International Code Council, which recently sued the startup UpCodes for republishing building codes.) Higharc said that as it expands into new markets (it's currently beginning its first rollout in the Chapel Hill, North Carolina, area), it is also incorporating regional building codes. To help with siting, Higharc pulls in public GIS data. Users can pick a plot anywhere in their area from a Google Maps–like interface and try out building their home. They can then take their design and see how it fits on another plot, and Higharc will adjust the home accordingly to make sure it fits just right on the new site. Right now, The Sims comparison might go a little too far—those 3D characters don’t have to worry too much about structural integrity, after all. Higharc allows users to choose from a series of options—preset aesthetics, number of bedrooms, guest suites, number of floors, the size of each room, etc.—and automatically generates a home optimized for those selections and the chosen plot, immediately adjusting and restructuring the entire home as the homebuyer switches options. All the while, the software displays an estimated cost range that adapts with each change to help users stay on budget. “We’re making [home building] a fun process, making it an accessible process for everyone,” said Bergin. “Ultimately, we just want to make better neighborhoods and give home buyers and builders choice—and agency.”
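Higharc hasn't published its rule engine, but the pattern Bergin describes, generating a candidate and then validating it against prescriptive structural limits before it ever reaches the user, is easy to illustrate. The joist sizes and span limits below are simplified placeholders, not real International Residential Code table values, which vary by species, grade, and spacing.

```python
# Hypothetical, simplified floor-joist span limits in feet. Real values come
# from the IRC prescriptive span tables and are far more granular.
MAX_SPAN_FT = {"2x8": 12, "2x10": 15, "2x12": 18}

def generate_room(width_ft, depth_ft):
    """Accept a candidate room only if some prescriptive joist can span it,
    choosing the smallest adequate size; otherwise reject the candidate."""
    span = min(width_ft, depth_ft)  # joists run across the short dimension
    for size, limit in sorted(MAX_SPAN_FT.items(), key=lambda kv: kv[1]):
        if span <= limit:
            return {"width": width_ft, "depth": depth_ft, "joist": size}
    raise ValueError(f"no prescriptive joist spans {span} ft; add a beam or resize")
```

A generator gated this way can never emit a room its span table can't frame, which is the kind of structural guarantee Bergin describes, applied to every dimension the user touches.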
Presented by the University of Virginia School of Architecture and the California College of the Arts / Digital Craft Lab. Curated by Andrew Kudless and Adam Marcus.

Emerging technologies of design and production have opened up new ways to engage with traditional practices of architectural drawing. This exhibition, the second volume in a series organized by the CCA Digital Craft Lab, features experimental drawings by architects who explore the impact of new technologies on the relationship between code and drawing: how rules and constraints inform the ways we document, analyze, represent, and design the built environment.

Participants: Benjamin Aranda & Chris Lasch; Bradley Cantrell & Emma Mendel; Sean Canty; Madeline Gannon; Howeler + Yoon; MARC FORNES / THEVERYMANY; Ibañez Kim; IwamotoScott Architecture; Stephanie Lin; V. Mitch McEwen; MILLIØNS (Zeina Koreitem & John May); Nicholas de Monchaux and Kathryn Moll; MOS (Michael Meredith & Hilary Sample); Catie Newell; Tsz Yan Ng; William O’Brien Jr.; Outpost Office; Heather Roberge; Jenny Sabin; SPORTS; John Szot; T+E+A+M; Nader Tehrani; Maria Yablonina

Curator Talk and Panel Discussion: Monday, March 25, at 12:00 p.m., Campbell Hall 153
This past fall, artist Lee Simmons unveiled a massive 50-foot intervention in London’s Marylebone neighborhood, completed over a four-year collaboration with Bath, U.K.–based Format Engineers. Titled Quadrilinear, the project is an assemblage of five layers of laser-cut steel that climbs four stories through a private clinic designed by ESA Architects. Simmons worked with the architects, engineers, and fabricators to help bring the sculpture, which was commissioned by Howard de Walden Estates, to fruition. The stainless-steel column is based on deconstructed maps of historic Marylebone, abstracted and collaged together. The intent, according to Simmons, was to engage with the “context and rhythm and fabric of the facade,” but in such a way that the sculpture could “have a life outside of the architectural canvas” it was built within. The hope is that Quadrilinear might be more than just an architectural accent and that it will become a “gateway” to the historical road. For Simmons, the work is partially a reference to the historic cornerstones that demarcate the built environment and introduce buildings and their histories. Format Engineers realized the technical aspects of Quadrilinear with the fabricators Littlehampton Welding. The airy sculpture is made of thin filigree steel sheets just under a quarter of an inch thick, clamped together by 1,200 stainless-steel rods—the minimum that Format Engineers could reasonably use while maintaining structural integrity. Compressed in this manner, the lattice sheets behave like a Vierendeel truss, with bolt tension counteracting rotation at the joints. The whole free-standing structure has a slight curve that allows it to seem suspended almost weightlessly within the building’s frame despite its nearly 17-ton weight.
Format Engineers relied on computational scripting to evaluate the most efficient ways of distributing stress and laying out the sculpture, and the bolts are, according to the firm, “clustered in a pattern reflecting a pure mechanical logic.” This approach minimized fabrication costs and simplified construction while maintaining the visual complexity of the piece. In the end, all of this engineering resulted in a structure that, in Simmons’s terms, evinces the “symbiotic way” that art and architecture have worked together in the built environment throughout history.

https://vimeo.com/290294269
Gallery Roundtable & Reception: Tuesday, January 29, 2019, 6:00 p.m., with Dean Nader Tehrani, curators Andrew Kudless and Adam Marcus, Sean Anderson, Michael Young, and contributors to the exhibition.
Located in Mexico City’s Museo Universitario Arte Contemporaneo, KnitCandela is a 13-foot-tall curved concrete shell formed with a 3-D-knitted framework. The sculptural project is a collaboration between Zaha Hadid Architects' Computation and Design Group (ZHCODE); ETH Zurich’s Block Research Group (BRG), led by Philippe Block and Tom Van Mele with PhD student Mariana Popescu; and Mexico’s Architecture Extrapolated, which managed the on-site execution of the project. Named in homage to the concrete-bending designs of architect and structural engineer Félix Candela, the pavilion rests on three parabolic arches, with interior threadwork fashioned to resemble traditional garb found in the state of Jalisco, 340 miles northwest of the country’s capital. The pavilion is an outdoor feature of the museum's new exhibition, Design as Second Nature, featuring four decades of Zaha Hadid Architects' (ZHA) research into construction technology and design innovation. The project builds upon ETH Zurich's numerous recent forays into lightweight concrete structures based on curved geometries and digitally designed formwork. Currently, the university is leading KnitCrete, a partnership with the Swiss National Centre of Competence in Research in Digital Fabrication, to boost the technological expertise and production of hybrid and ultra-lightweight concrete structures. Past projects include an experimental concrete roof cast on 3-D-printed sand formwork and an ultralight roof cap composed of a polymer textile and a network of steel cables. According to ETH Zurich, Block and Van Mele’s research group plugged a digitally generated pattern into an industrial knitting machine to produce the formwork. Over the course of 36 hours, the flat-bedded machine knitted over 200 miles of polyester yarn into four 3-D double-layered strips. To suspend the canopy, the upper layer of the textile bears a series of sleeves for the insertion of supporting cables.
Additionally, the woven formwork integrated 1,000 inflatable modeling balloons that became waffle-like voids in the shell after the initial coating of concrete. The entire woven assembly, weighing a meager 55 pounds, was transported to the site in two suitcases stowed as normal checked baggage. Once on-site, the double-layered textile was tensioned between a steel-and-wood boundary frame and given an initial, millimeters-thick concrete coating. After this coating hardened into a lightweight mold, the team poured five tons of fiber-reinforced concrete over the original 120-pound polyester-and-cable framework. The pavilion will remain in place until March 3, 2019.
ACADIA, or the Association for Computer Aided Design in Architecture, is set to meet in Mexico City at the Universidad Iberoamericana from October 18–20. Each year ACADIA brings together leading scholars, researchers, and practitioners who push the boundaries of architecture through design and computation. AN spoke with conference organizers Brian Slocum and Pablo Kobayashi, along with Technical Chair Phillip Anzalone, about the excitement of bringing the conference to Mexico for the first time.

AN: Why is this year’s conference so special?

This is the first time in ACADIA’s nearly 38-year history that the gathering has been hosted in Mexico. The type of work that will be presented is something that hasn’t been seen locally and is not yet part of the culture of the institutions. Mexico, of course, has a rich tradition of craft, artisanal labor, and analog computation within architectural practices. We hope that by bringing ACADIA to Universidad Iberoamericana and UNAM we can start a conversation about moving architecture forward.

AN: The theme of this year’s conference is Recalibration: On Imprecision and Infidelity. What do you mean by recalibration?

The digital tools we use are very precise, and with that precision comes an obsessive need to control the output. In a certain sense, as a field we are facing a surplus of precision. We want to ask: Can error and imprecision (so-called glitches and failures) be seen as a creative act and become part of the dialogue? We have seen a shift in proposals and projects from those that place an emphasis on the tools of architectural design (robots, 3-D printers, BIM), which embody the precision and fidelity that the conference theme reacts to, toward related disciplines and trajectories that break free from computational preconceptions and begin to encourage a redefinition of the traditional tools and processes that are at the heart of experimentation and production.
Through technologies such as mixed reality and artificial intelligence, processes such as the reuse and repurposing of materials, the integration of computer and human interaction, and other trends, current researchers inhabit a fluid zone where total control and the dichotomy of virtual and real are blurred, allowing innovation and discovery to flourish. Also in terms of recalibrating the discourse: How do we deal with bigger, more social problems and evaluate the social impact of computation? How do you evaluate the results of an investigation that stems from a worldview rather than starting just from the data? How can we negotiate these social recalibrations without being too polemical? We started by speaking of truth and fidelity in computational output and arrived at this broader idea of recalibration. Our only hope, ultimately, is to shake things up a bit, shake up the discourse.

AN: Can you speak more to how global (re)calibration works and how you define disciplines in increasingly co-located and overlapping fields of research? How does knowledge transfer work in an already connected world of research?

The 2018 ACADIA conference is precisely (or perhaps I should say imprecisely) the forum needed for the pursuit of knowledge in a globalized environment. Simple digital connections via social media, publication, and direct communication are significantly enhanced through physical interactions, such as those that develop at a conference. The choice of a site and a theme that not only define boundaries and create parameters for discussion, but also engage a culture, an environment, and a sense of physicality, is critical to the work of combining the rigor of experimentation with the passion of discovery. The location and theme for this year’s conference propose not only a new way to look at research and practice in architecture but also explore new places and ideas that have the potential to remake our environment.
With an eye toward those locations, techniques, and ways of thinking that have been evolving and flourishing outside the walls of digital environments, and embracing the difference between the visualized and the experienced, architectural design is discovering a new world of interaction that points toward the future of the built environment.

AN: What are you most excited about in this year's speaker lineup?

I think we’ve found a good balance of speakers who challenge our own thinking on architecture and computation and continue to produce innovations in the field. Our keynotes range from global speakers such as Philippe Block, Patrik Schumacher, and Francesca Hughes to Mexico City–based practitioners Rafael Lozano-Hemmer and Diego Ricalde. Equally, ACADIA’s award winners this year continue to push architectural research and education in new and interesting directions. ACADIA is proud to honor the work of Mónica Ponce de León, Jenny Wu and Dwayne Oyler, Madeline Gannon, Sigrid Brell-Cokcan and Johannes Braumann, Areti Markopoulou, and all our paper session presenters.

ACADIA kicks off next week with workshops held at UNAM from October 15–17. The conference sessions and keynotes run October 18–20 at Universidad Iberoamericana. Visit 2018.acadia.org for more information.
Researchers at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, are giving timber construction a mechanical leg up with the introduction of prefabricated, robotically assembled timber frame housing. Together with Erne AG Holzbau, a contracting firm that specializes in timber, researchers at the institute’s Chair of Architecture and Digital Fabrication have developed Spatial Timber Assemblies, a system for digitally fabricating and constructing complex forms from timber. After a model of the structure has been laid out, robotic arms mounted in the ceiling of the assembly chamber are capable of building the required parts as well as putting them together. First, one arm picks up a beam and holds it while a human trims the piece to the proper size and shape. Then, a second robot arm pre-drills the holes needed for attaching the beam to the structure; finally, both robot arms work together to precisely place the beam as a human attaches it. Thanks to algorithms developed by the researchers, the arms are able to constantly recalculate their location in space and how to move forward without bumping into each other (or humans on the job site). A major advantage of Spatial Timber Assemblies is that structures built this way derive their load-bearing capacity from their geometry, and don’t require reinforcing plates or any additional steel. If the overall design changes during construction, researchers are able to calculate a new, optimized framing solution using load-distribution algorithms. The system is more than theoretical. ETH researchers are currently using it to assemble six unique modules, which will join to frame the top two floors of the experimental DFAB HOUSE in Dübendorf, a suburb of Zurich. Once installed on site, both floors will have distinct rooms across 328 square feet of floor space. The final design, which uses 487 individual beams, will be wrapped in a clear plastic facade so that the underlying timber structure can remain exposed.
Robotic construction is advancing rapidly, and ETH researchers have been developing robots that weld, spray concrete, and stack bricks to create forms that would previously have been difficult to build. And if the ETH needs help decorating the interior of its research house, robots can now assemble IKEA furniture, too.
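ETH's collision-avoidance planning is far richer than any toy example, but the core safety check the researchers describe, verifying that two arms' planned motions keep their distance before committing to them, can be sketched. The straight-line tool paths, lockstep timing, and half-meter clearance below are all assumptions made for illustration, not details of the ETH system.

```python
import math

def lerp(a, b, t):
    """Point a fraction t of the way along the segment from a to b."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

def paths_clear(start_a, end_a, start_b, end_b, clearance=0.5, samples=50):
    """Sample both tool paths in lockstep and confirm the two tool centers
    never come within `clearance` meters of each other."""
    for k in range(samples + 1):
        t = k / samples
        if math.dist(lerp(start_a, end_a, t), lerp(start_b, end_b, t)) < clearance:
            return False  # planned motions would collide; replan before moving
    return True
```

A real planner would model the full arm geometry and replan continuously, but the principle is the same: a move is only executed once the shared workspace has been checked.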
The buildings of the future—if the team at Gramazio Kohler Research (GKR) has its way—will be built by robots. Not just one type of robot but many different kinds, each programmed to perform a different type of work, with a different type of material, and, as a result, to generate a different type of structure. The researchers—led by professors Fabio Gramazio and Matthias Kohler of ETH Zurich—aim, according to the lab’s mission statement, to “examine the changes in architectural production requirements that result from introducing digital manufacturing techniques.” This research-and-development effort focuses on anticipating and ultimately generating the construction processes of our robot-filled future through interdisciplinary collaboration. GKR’s experiments are part of an effort by the so-called ETH Domain—a research network of universities including ETH Zurich and other independent research institutions based in Switzerland—to prototype and develop new technologies using a research-centered approach. The lab’s recent efforts have gone toward developing the so-called DFAB house, a project undertaken by eight ETH Zurich research professors that aims to construct the first-ever digitally planned, designed, and constructed structure. The project will test several of GKR’s research endeavors at full scale, in concert with the other teams’ research, and is expected to be completed in 2018.

Jammed Architectural Structures

Rock Print is a robotically constructed architectural installation built from “low-grade granular material,” a focus of the lab’s research into jammed architectural structures erected in nonstandard shapes. The initiative focuses on the robotic aggregation of small rocks that are “quite literally crammed together in such a way that the mass holds its form and shape like a solid,” according to the project website.
To produce the installation, a robotic arm drizzles an adhesive polymer thread over alternating layers of rocks until the mass becomes structurally sound. The bulbous column that results can be deconstructed by pulling the thread away so that its constituent components can be reused. The technique was shown off at the 2015 Chicago Architecture Biennial as a dynamic architectural installation in partnership with the Self-Assembly Lab at the Massachusetts Institute of Technology.

Complex Timber Structures

The team has also worked with wood construction techniques in an effort not only to cut down on wood waste but also to find useful applications for Switzerland’s abundant softwood resources. The Complex Timber Structures experiment grafts together precisely cut lengths of wood using a variety of joinery techniques—including glue impregnation—to create tessellated, geometric forms. The three-dimensional truss structures link together to create comparatively strong arrangements that are also lightweight. The project was developed as part of the SNSF National Research Programme in collaboration with the Bern University of Applied Sciences Architecture, Wood and Civil Engineering.

Mesh Mold Metal

In conjunction with the Agile & Dexterous Robotics Lab of Professor Jonas Buchli, the research team has also tackled the automated construction of doubly curved reinforced concrete walls with its Mesh Mold Metal project. The technique uses a robotic arm to splice and spot-weld quarter-inch-thick gridded rebar segments into place, creating a rigid cage that can then be filled with concrete. The robot’s human assistant loads the rebar into the robot’s capable arms and applies the concrete by hand while the machine stipples the bits of metal together. The resulting S-shaped wall is finished with shotcrete for a smooth surface.
On-Site Robotic Construction

Rather than crafting meticulously curved walls, the On-Site Robotic Construction technique attempts to automate “nonstandard construction tasks” like stacking bricks in uneven arrangements. Researchers devised a robotic arm that uses a collection of cameras to examine and manipulate nonstandard arrangements of objects, which are then moved into new configurations. The “adaptive building” technique was developed as part of Switzerland’s National Centre of Competence in Research Digital Fabrication initiative.
This is the fourth column of “Practice Values,” a bi-monthly series by architect and technologist Phil Bernstein. The column focuses on the evolving role of the architect at the intersection of design and construction, including subjects such as alternative delivery systems and value generation. Bernstein was formerly a vice president at Autodesk and now teaches at the Yale School of Architecture. In my last column I explored the potential impacts of next-generation technology—particularly machine intelligence (also known as artificial intelligence or AI) and crowd-sourced knowledge—on the hegemony of professionalism for architects. This question was recently explored further by Daniel Susskind, one of the authors of an Oxford study published in a RIBA journal article entitled “The Way We’ll Work Tomorrow,” which suggested that modern knowledge work, like much of that performed by architects today, should be considered not so much “by job” as “by task,” and that many of those tasks are likely to be automated in the future. Professions exist to systematize expertise and, by extension, control access to it. Computation democratizes access to that expertise by digitizing and distributing it, but does this lead to an inevitable decline in the need for professionals themselves? Like manufacturing workers in the 20th century, knowledge workers are likely to be “de-skilled” in the 21st, as routine, transactional, and analytical tasks are performed by machine-learning algorithms referencing big data sources, and the need for human abilities for those same chores is eliminated. Just as CAD rendered my once-fearsome hand-drafting skills mostly irrelevant, expert systems may do the same with today’s expertise in, say, cost estimating or construction documentation.
Even though architectural design writ large is a profoundly creative act, the more prosaic components—preparing schedules, measuring and calculating, even evaluating performance characteristics like safety or zoning conformance—comprise a sizable portion of the architect’s fee. Production tasks connected to technical documentation alone (think CD phase work) can be as much as 40 percent of compensation on a project. Once this stuff gets automated, will there be much less work, and will we need far fewer architects? Perhaps—unless we find alternate strategies for demonstrating the value of our efforts. Oxford’s Susskind suggests that while the “job of an architect” may be profoundly transformed with technology, the profession should reconsider some of our critical tasks in response. If design processes will inevitably be augmented by computation, we might control our destiny by taking on the problem of creating the resulting computational platforms: engineering knowledge systems and structures, developing workflow protocols for analysis and evaluation, and designing new systems from which design itself can spring. In some sense, this is meta-design—not unlike the work we’ve seen since the advent of BIM that required technology-implementation plans, data standards, and integrated multidisciplinary information flows. Cutting-edge design firms rely heavily on scripts and so-called “generative design” techniques, and what Susskind recommends here is a logical extension of that strategy that augments (rather than replaces) the capabilities of designers. Of course, the same technologies that might appear to be threats to our autonomy as architects could, jujitsu-style, be turned into opportunities. Susskind suggests that automation offers the immediate benefit of making routine activities more efficient; perhaps repurposing those newly found hours means more time to improve design. 
He further recommends that our knowledge and influence could be magnified via consortia of digitally connected professionals, what he calls “communities of expertise,” where the sum is far greater than the individual parts. Author and Harvard architecture professor Peter Rowe once described the design process as dependent upon heuristic reasoning, since all design challenges are complex and somewhat open-ended with ambiguous definitions and indeterminate endpoints, borrowing from sociologist Horst Rittel, who characterized these as “wicked problems.” Computers themselves aren’t, at least today, particularly good at heuristics or solving wicked problems, but they are increasingly capable of attacking the “tame” ones, especially those that require the management of complex, interconnected quantitative variables like sustainable performance, construction logistics, and cost estimation. And since clients have a strong interest in seeing those things done well, why not lean into the chance to complement heuristics with some help on the tame problems, and capture the resulting value? That architects are so well suited to the challenges of the wicked problem bodes well for us in the so-called "Second Machine Age," when machines don’t just do things we program them to do, but can learn how to do new things themselves.
The essential value of architects as professionals who can understand and evaluate a problem and synthesize unique and insightful solutions will likely remain unchallenged by our computer counterparts in the near future, an argument supported by a 2013 study of job computerization (again, at Oxford) that suggested that “occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.” Rather than rely upon this vaguely comforting conclusion, our profession must embrace and attack the wicked problem of the future of architecture and computational design and control the future of our profession accordingly. We’ll face far more opportunities than threats from computation if we can.
The beginnings of digital drafting and computational design will be on display at the Museum of Modern Art (MoMA) starting November 13th, as the museum presents Thinking Machines: Art and Design in the Computer Age, 1959–1989. Spanning 30 years of works by artists, photographers, and architects, Thinking Machines captures the postwar period of reconciliation between traditional techniques and the advent of the computer age. Organized by Sean Anderson, associate curator in the museum's Department of Architecture and Design, and Giampaolo Bianconi, a curatorial assistant in the Department of Media and Performance Art, the exhibition examines how computer-aided design became permanently entangled with art, industrial design, and space planning. Drawings, sketches, and models from Cedric Price’s 1978–80 Generator Project, the never-built “first intelligent building project,” will also be shown. A response to a prompt put out by the Gilman Paper Corporation for its White Oak, Florida, site to house theater and dance performances alongside traveling artists, Price’s Generator proposal sought to stimulate innovation by constantly shifting arrangements. Ceding control of the floor plan to a master computer program and crane system, a series of 13-by-13-foot rooms would have been continuously rearranged according to the users’ needs. Constrained only by a general set of Price’s design guidelines, Generator’s program would even have been capable of rearranging rooms on its own if it felt the layout hadn’t been changed frequently enough. Raising important questions about the interaction between a space and its occupants, Generator laid the groundwork for computational architecture and smart building systems. Exploring the rise of the plotter and the production of computer-generated images, Thinking Machines provides a valuable look into the transition between hand-drawn imagery and today’s modern suite of design tools.
The sinuous works of Zaha Hadid and other architects who rely on computational design to realize their projects all owe a debt to the artists on display in Thinking Machines. Thinking Machines: Art and Design in the Computer Age, 1959–1989 runs from November 13th to April 8th, 2018. MoMA members can preview the show from November 10th through the 12th.