Posts tagged with "Computational Design":

Gramazio Kohler Research wants to build the future using robots

The buildings of the future—if the team at Gramazio Kohler Research (GKR) has its way—will be built by robots. Not just one type of robot but many different kinds, each programmed to perform a different type of work with a different type of material and, as a result, to generate a different type of structure. The researchers—led by professors Fabio Gramazio and Matthias Kohler of ETH Zurich—aim, according to the lab’s mission statement, to “examine the changes in architectural production requirements that result from introducing digital manufacturing techniques.” This research-and-development effort focuses on anticipating and ultimately generating the construction processes of our robot-filled future through interdisciplinary collaboration. GKR’s experiments are part of an effort by the so-called ETH Domain—a research network of universities, including ETH Zurich, and other independent research institutions based in Switzerland—to prototype and develop new technologies using a research-centered approach. The lab’s recent efforts have gone toward developing the DFAB House, a project undertaken by eight ETH Zurich research professors that aims to construct the first-ever digitally planned, designed, and constructed structure. The project will test several of GKR’s research endeavors at full scale, in concert with the other teams’ research, and is expected to be completed in 2018.

Jammed Architectural Structures

Rock Print is a robotically constructed architectural installation built from “low-grade granular material,” part of the lab’s research into jammed architectural structures erected in nonstandard shapes. The initiative focuses on the robotic aggregation of small rocks that are “quite literally crammed together in such a way that the mass holds its form and shape like a solid,” according to the project website. To produce the installation, a robotic arm drizzles an adhesive polymer thread over alternating layers of rocks, which ultimately become structurally sound. The bulbous column that results can be deconstructed by pulling the thread away so that its constituent components can be reused. The technique was shown at the 2015 Chicago Architecture Biennial as a dynamic architectural installation in partnership with the Self-Assembly Lab at the Massachusetts Institute of Technology.

Complex Timber Structures

The team has also worked with wood construction techniques in an effort not only to cut down on wood waste but also to find useful applications for Switzerland’s abundant softwood resources. The Complex Timber Structures experiment grafts together precisely cut lengths of wood using a variety of joinery techniques—including glue impregnation—to create tessellated, geometric forms. The three-dimensional truss structures link together into arrangements that are comparatively strong yet lightweight. The project was developed as part of the SNSF National Research Programme in collaboration with the Bern University of Applied Sciences Architecture, Wood and Civil Engineering.

Mesh Mold Metal

In conjunction with the Agile & Dexterous Robotics Lab of Professor Jonas Buchli, the research team has also tackled the automated construction of doubly curved reinforced-concrete walls with its Mesh Mold Metal project. The technique uses a robotic arm to splice and spot-weld quarter-inch-thick gridded rebar segments into place, creating a rigid cage that can then be filled with concrete. The robot’s human assistant loads the rebar into the robot’s arms and applies the concrete by hand while the machine stipples the bits of metal together. The resulting S-shaped wall is finished with shotcrete for a smooth surface.

On-Site Robotic Construction

Rather than crafting meticulously curved walls, the On-Site Robotic Construction technique attempts to automate “nonstandard construction tasks” like stacking bricks in uneven arrangements. Researchers devised a robotic arm that uses a collection of cameras to examine and manipulate nonstandard arrangements of objects, which are then moved into new configurations. The “adaptive building” technique was developed as part of Switzerland’s National Centre of Competence in Research (NCCR) Digital Fabrication initiative.
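
To make the “adaptive building” idea concrete, here is a minimal Python sketch of the sense-plan-act loop such a system implies: scan what has already been built, compare it against the design model, and correct the next placement. The data structures, function names, and tolerance are illustrative assumptions, not GKR’s actual software.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float
    rotation_deg: float = 0.0


def scan_built_state(camera_feed: list) -> list:
    """Stand-in for the vision step: return poses of bricks already in place."""
    return list(camera_feed)


def next_target(design: list, built: list, tolerance: float = 0.005) -> Pose | None:
    """Pick the first brick in the design sequence not yet placed within tolerance."""
    for planned in design:
        placed = any(
            abs(planned.x - b.x) < tolerance
            and abs(planned.y - b.y) < tolerance
            and abs(planned.z - b.z) < tolerance
            for b in built
        )
        if not placed:
            return planned
    return None


def build(design: list, camera_feed: list) -> None:
    """Sense-plan-act loop: re-scan after every placement so errors never accumulate."""
    while True:
        built = scan_built_state(camera_feed)
        target = next_target(design, built)
        if target is None:
            break
        # A real cell would command the arm here; this sketch simply records the placement.
        camera_feed.append(target)
        print(f"placed brick at ({target.x:.2f}, {target.y:.2f}, {target.z:.2f})")


if __name__ == "__main__":
    # A short, deliberately uneven course of bricks stands in for the "nonstandard" design.
    wall = [Pose(x=0.25 * i, y=0.02 * (i % 2), z=0.09 * (i // 3)) for i in range(6)]
    build(wall, camera_feed=[])
```

Re-scanning on every cycle is what lets such a system tolerate the uneven, nonstandard stacks the article describes, since each new placement is planned from the as-built state rather than from an idealized model.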

How can architects adapt to the coming age of AI?

This is the fourth column of “Practice Values,” a bi-monthly series by architect and technologist Phil Bernstein. The column focuses on the evolving role of the architect at the intersection of design and construction, including subjects such as alternative delivery systems and value generation. Bernstein was formerly vice president at Autodesk and now teaches at the Yale School of Architecture.

In my last column I explored the potential impacts of next-generation technology—particularly machine intelligence (also known as artificial intelligence or AI) and crowd-sourced knowledge—on the hegemony of professionalism for architects. This question was recently explored further by Daniel Susskind, one of the authors of an Oxford study published in a RIBA journal article entitled “The Way We’ll Work Tomorrow,” which suggested that modern knowledge work, like much of that performed by architects today, should be considered not so much “by job” as “by task,” and that many of those tasks are likely to be automated in the future.

Professions exist to systematize expertise and, by extension, control access to it. Computation democratizes access to that expertise by digitizing and distributing it, but does this lead to an inevitable decline in the need for professionals themselves? Like manufacturing workers in the 20th century, knowledge workers are likely to be “de-skilled” in the 21st, as routine, transactional, and analytical tasks are performed by machine-learning algorithms referencing big data sources, and the need for human abilities for those same chores is eliminated. Just as CAD rendered my once-fearsome hand-drafting skills mostly irrelevant, expert systems may do the same with today’s expertise in, say, cost estimating or construction documentation.

Even though architectural design writ large is a profoundly creative act, the more prosaic components—preparing schedules, measuring and calculating, even evaluating performance characteristics like safety or zoning conformance—comprise a sizable portion of the architect’s fee. Production tasks connected to technical documentation alone (think CD-phase work) can be as much as 40 percent of compensation on a project. Once this stuff gets automated, will there be much less work, and will we need far fewer architects?

Perhaps—unless we find alternate strategies for demonstrating the value of our efforts. Oxford’s Susskind suggests that while the “job of an architect” may be profoundly transformed by technology, the profession should reconsider some of our critical tasks in response. If design processes will inevitably be augmented by computation, we might control our destiny by taking on the problem of creating the resulting computational platforms: engineering knowledge systems and structures, developing workflow protocols for analysis and evaluation, and designing new systems from which design itself can spring. In some sense, this is meta-design—not unlike the work we’ve seen since the advent of BIM, which required technology-implementation plans, data standards, and integrated multidisciplinary information flows. Cutting-edge design firms already rely heavily on scripts and so-called “generative design” techniques, and what Susskind recommends here is a logical extension of that strategy, one that augments (rather than replaces) the capabilities of designers.

Of course, the same technologies that might appear to be threats to our autonomy as architects could, jujitsu-style, be turned into opportunities.

Susskind suggests that automation offers the immediate benefit of making routine activities more efficient; perhaps repurposing those newly found hours means more time to improve design. He further recommends that our knowledge and influence could be magnified via consortia of digitally connected professionals, what he calls “communities of expertise,” where the sum is far greater than the individual parts.

Author and Harvard architecture professor Peter Rowe once described the design process as dependent upon heuristic reasoning, since all design challenges are complex and somewhat open-ended, with ambiguous definitions and indeterminate endpoints, borrowing from sociologist Horst Rittel, who characterized these as “wicked problems.” Computers aren’t, at least today, particularly good at heuristics or at solving wicked problems, but they are increasingly capable of attacking the “tame” ones, especially those that require the management of complex, interconnected quantitative variables like sustainable performance, construction logistics, and cost estimation. And since clients have a strong interest in seeing those things done well, why not lean into the chance to complement heuristics with some help on the tame problems, and capture the resulting value?

That architects are so well suited to the challenges of the wicked problem bodes well for us in the so-called "Second Machine Age," when machines don’t just do things we program them to do but can learn how to do new things themselves. The essential value of architects as professionals who can understand and evaluate a problem and synthesize unique and insightful solutions will likely remain unchallenged by our computer counterparts in the near future, an argument supported by a 2013 study of job computerization (again, at Oxford) that suggested that “occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.” Rather than rely upon this vaguely comforting conclusion, our profession must embrace and attack the wicked problem of the future of architecture and computational design, and control the future of our profession accordingly. We’ll face far more opportunities than threats from computation if we can.
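
To ground the distinction, here is a minimal Python sketch of the kind of “tame,” scripted evaluation that generative-design workflows already automate: enumerating massing options, checking them against simple coverage and height limits, and ranking them on a crude cost metric. All rates, limits, and the scoring criterion below are invented placeholders, not any firm’s actual method.

```python
from itertools import product

# Assumed, purely illustrative rates and limits, not real project numbers.
FLOOR_TO_FLOOR_M = 4.0
COST_PER_M2 = 2400.0            # all-in cost per square meter of floor area
FACADE_COST_PER_M2 = 900.0      # cost per square meter of facade
SITE_COVERAGE_LIMIT_M2 = 900.0  # stand-in for a zoning coverage check
MAX_HEIGHT_M = 60.0             # stand-in for a zoning height check


def evaluate(width_m: float, depth_m: float, floors: int) -> dict | None:
    """Return metrics for one massing option, or None if it fails the simple checks."""
    footprint = width_m * depth_m
    height = floors * FLOOR_TO_FLOOR_M
    if footprint > SITE_COVERAGE_LIMIT_M2 or height > MAX_HEIGHT_M:
        return None  # fails the "tame" conformance checks
    gross_area = footprint * floors
    facade_area = 2.0 * (width_m + depth_m) * height
    cost = gross_area * COST_PER_M2 + facade_area * FACADE_COST_PER_M2
    return {"width": width_m, "depth": depth_m, "floors": floors,
            "gross_area_m2": round(gross_area), "estimated_cost": round(cost)}


# Enumerate a small option space and rank by floor area delivered per unit cost.
options = [
    metrics
    for w, d, f in product((20.0, 25.0, 30.0), (25.0, 30.0), range(6, 16))
    if (metrics := evaluate(w, d, f)) is not None
]
best = max(options, key=lambda m: m["gross_area_m2"] / m["estimated_cost"])
print(best)
```

The wicked part of the problem, deciding which of the conforming options is actually worth building, stays with the designer; the script only clears away the arithmetic.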

New MoMA show plots the impact of computers on architecture and design

The beginnings of digital drafting and computational design will be on display at the Museum of Modern Art (MoMA) starting November 13th, as the museum presents Thinking Machines: Art and Design in the Computer Age, 1959–1989. Spanning 30 years of works by artists, photographers, and architects, Thinking Machines captures the postwar period of reconciliation between traditional techniques and the advent of the computer age. Organized by Sean Anderson, associate curator in the museum's Department of Architecture and Design, and Giampaolo Bianconi, a curatorial assistant in the Department of Media and Performance Art, the exhibition examines how computer-aided design became permanently entangled with art, industrial design, and space planning.

Drawings, sketches, and models from Cedric Price’s 1978–80 Generator Project, the never-built “first intelligent building project,” will also be shown. Conceived in response to a prompt put out by the Gilman Paper Corporation for its White Oak, Florida, site, which was to house theater and dance performances alongside traveling artists, Price’s Generator proposal sought to stimulate innovation by constantly shifting arrangements. Ceding control of the floor plan to a master computer program and crane system, a series of 13-by-13-foot rooms would have been continuously rearranged according to the users’ needs. Constrained only by a general set of Price’s design guidelines, Generator’s program would even have been capable of rearranging rooms on its own if it felt the layout hadn’t been changed frequently enough. Raising important questions about the interaction between a space and its occupants, Generator laid the groundwork for computational architecture and smart building systems.

Exploring the rise of the plotter and the production of computer-generated images, Thinking Machines provides a valuable look into the transition between hand-drawn imagery and today’s suite of digital design tools. The sinuous works of Zaha Hadid and other architects who rely on computational design to realize their projects all owe a debt to the artists on display in Thinking Machines.

Thinking Machines: Art and Design in the Computer Age, 1959–1989 will run from November 13th to April 8th, 2018. MoMA members can preview the show from November 10th through the 12th.
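
A toy Python sketch can make Generator’s described behavior concrete: rooms are rearranged when users ask, and if the layout sits unchanged for too long the program reshuffles it on its own. The grid size, room count, and “boredom” threshold below are assumptions for illustration; Price’s actual program logic is not documented here.

```python
import random

GRID = 10               # the site abstracted as a 10 x 10 field of possible positions
ROOM_COUNT = 13         # number of relocatable 13-by-13-foot rooms (an assumed count)
BOREDOM_THRESHOLD = 3   # cycles without change before the program intervenes


def random_layout() -> set:
    """A random arrangement of rooms on the grid."""
    cells = [(x, y) for x in range(GRID) for y in range(GRID)]
    return set(random.sample(cells, ROOM_COUNT))


def step(layout: set, user_request, idle_cycles: int):
    """Apply a user's requested layout, or reshuffle if the program gets 'bored'."""
    if user_request is not None and user_request != layout:
        return user_request, 0
    if idle_cycles + 1 >= BOREDOM_THRESHOLD:
        return random_layout(), 0  # the program rearranges the rooms on its own
    return layout, idle_cycles + 1


if __name__ == "__main__":
    layout, idle = random_layout(), 0
    for cycle in range(8):
        # No user requests in this run, so the program eventually intervenes itself.
        layout, idle = step(layout, user_request=None, idle_cycles=idle)
        print(f"cycle {cycle}: idle cycles = {idle}, one room at {sorted(layout)[0]}")
```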

Primitive hut installation by OMG! decomposes in upstate New York

One might say that the Primitive Hut pavilion by OMG! is both high-tech and traditional, even temporary and permanent. A mix of new and old techniques and technologies, the project is designed to last indefinitely while at the same time decomposing away. OMG! is a collaboration between Martin Miller of Antistatics and Caroline O’Donnell of CODA, and their latest installation, Primitive Hut, is situated at the OMI International Arts Center in Ghent, New York. The two set out to produce a work that would question the relationship of architecture to time through an exploration of growth and decay.

To do so, they engaged digital fabrication techniques as well as a structural system first proposed by NASA engineer Kenneth Cheung. Utilizing what are called digital cellular solids, the team constructed the installation out of a lattice of roughly yet accurately cut interlocking plywood modules. Besides the plywood, the only other materials employed were sawdust (waste from cutting the plywood), bio-resin, hemp, and an infill of manure cylinders. One other component that straddles the line between material and site is a set of four maple trees that grow through the structure. “The planting of the four trees more than offsets the wood used in the pavilion. As the pavilion decomposes, the trees will be nourished and will eventually lift the roof structure up in its branches. As in the original etching, the project is about opening up our understanding of architecture towards a better interaction with nature,” O’Donnell told AN.

Referencing the famous illustration from Marc-Antoine Laugier’s 1755 text "Essai sur l'architecture," O’Donnell explained the formal and material considerations as they relate to the temporal aspects of the project. “This text speculates on the primitive human’s first house as one which harnessed the potentialities of the environment and blurred the lines between nature and architecture. While this well-known image shows the iconic form of the house formed by the tree branches, the house is not yet formed and implies both a future and a past state of growth and decay.”

The 5,000 individual pieces that make up the structure were cut in a way that optimizes the fabrication process. “In the early days, computational design was often exuberant for the sake of exuberance, and the image of blobs and folds became synonymous with digital architecture,” Miller said in reference to the changing attitudes surrounding digital and parametric design. Rather than a purely formal or computational exploration, OMG!’s work is interested in leveraging efficiencies in the process and the ability to engage with the material at a precise, craftsperson-oriented level of detail. As such, the process of producing the individual pieces led to the specific pattern and texture born of the rationalized cutting process. “The ribbed effect produced actually reads as an exuberant detail, but is born out of the efficiency of the fabrication.”

The pavilion opened to the public on October 21, and will remain on the site in some form until the trees that are growing through it die. So, at least 200 years.
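
For readers unfamiliar with digital cellular solids, the underlying logic is that a large structure is described as a regular lattice of one or a few identical interlocking parts, so fabrication reduces to cutting many copies of the same flat component. The short Python sketch below illustrates that bookkeeping; its cell size, block dimensions, and parts-per-cell count are invented, not the pavilion’s actual geometry.

```python
from itertools import product

CELL_SIZE_M = 0.30          # assumed lattice pitch
MODULES_PER_CELL = 3        # e.g., three interlocking plates per cubic cell (assumed)


def lattice_cells(nx: int, ny: int, nz: int) -> list:
    """Center points of every cell in an nx x ny x nz block of the lattice."""
    return [
        (x * CELL_SIZE_M, y * CELL_SIZE_M, z * CELL_SIZE_M)
        for x, y, z in product(range(nx), range(ny), range(nz))
    ]


cells = lattice_cells(12, 12, 8)
part_count = len(cells) * MODULES_PER_CELL
print(f"{len(cells)} cells -> {part_count} identical flat parts to cut")
```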

Architects must redesign their profession before technology does

This is the third column of “Practice Values,” a bi-monthly series by architect and technologist Phil Bernstein. The column focuses on the evolving role of the architect at the intersection of design and construction, including subjects such as alternative delivery systems and value generation. Bernstein was formerly vice president at Autodesk and now teaches at the Yale School of Architecture.

Disabling (Professional) Expertise

In 1977, social critic Ivan Illich argued that the mid-20th century should be named “The Age of Disabling Professions,” asking “if this age, when needs were shaped by professional design, will be remembered with a smile or with a curse.” Illich’s skepticism about the importance and role of doctors, lawyers, and architects was an inflection point in the ascendance of the professional class that began with the industrialization of America. What followed for architects—who, at just about the same time as Illich’s query, were subjected to the emergence of alternative forms of project delivery (like design-build), new incumbents treading on our turf (like construction managers), and influence from extrinsic forces (like lawyers and insurance companies)—was several decades of existential angst with which we are all familiar.

Forty years later, there are more architects, and more work for us, than ever—yet the existential angst remains: If recessions, construction managers, and liability insurance underwriters didn’t manage to dismantle the profession, now what? Enter the Oxford duo of Richard and Daniel Susskind and their 2015 tome The Future of the Professions, an exhaustive examination of how the broad influences of digital technology may be the end-of-times challenge to the professional class so desired by Illich. The Susskinds argue that it will not be a loss of faith in architects, lawyers, and accountants, but rather the broad democratization of expertise through big data and data sharing, expert systems, and automation that will “transform the work of human experts.” As knowledge work begins the same transfiguration in the world of computation that manufacturing experienced with machine automation, the bespoke relationships curated by architects with clients will be circumvented by widely accessible knowledge systems, architects will no longer be the anointed “gatekeepers” of professional knowledge or judgment, and the increasing complexity of building problems will meet economic pressures demanding that architects provide even more service for less money. Large swaths of professional services will be routinized by computers, further decomposing those services into discrete automated tasks. New systems of design and construction delivery will be reconstituted from traditional professional scopes disintermediated by algorithms and big data.

But if the essential value of architects is our ability to design—to see the world creatively, synthesize disparate information, and generate new and innovative ideas—aren’t we safe from this digital onslaught? Not so fast, according to the Susskinds, who ask, “To what problem is judgment the solution?” They cite the 60 million disputes on eBay resolved with automated mediation (and no lawyers), the medical advice dispensed by WebMD on smartphones around the world, and the online tax-preparation software used by millions of taxpayers each year; many of these people would never have dreamt of hiring a lawyer or an accountant. And this is the core of their argument: Technology will democratize expertise, making it available to many more recipients than could ever be reached through 1:1 professional relationships.

Since society created the professional class to codify and distribute professional expertise, shouldn’t this trend toward democratization be embraced? And since architects design only a small percentage of the built environment, isn’t this trend, in theory, all for the good? Perhaps. But should architects cede our authority to algorithms, it’s likely we’ll lose all control and influence over the forces that often reduce great design aspirations to mediocre results. It is difficult to deny, moreover, that the automation and resulting process innovation the Susskinds predict will put great pressure on the role of our profession while simultaneously eliminating the need for broad swaths of production work like working drawings.

How to respond? As far back as Illich’s original provocation, architects have decried our diminishing influence while greeting new technologies and their opportunities with at best mild enthusiasm and at worst outright hostility. This wave of automation-driven innovation will be far more profound than CAD or even BIM. Perhaps it offers a chance to deeply examine the value proposition of architecture and architects and, using our own skills, to design our roles in a future supported and accelerated by new technology rather than, once again, threatened by it.

ACADIA 2016 showcased the diversity of cutting-edge computational design

This year’s meeting of the Association for Computer-Aided Design in Architecture (ACADIA) was hosted at the University of Michigan’s Taubman College of Architecture and Urban Planning. It was the 36th meeting of ACADIA, and it was widely regarded as a successful showing. The theme of the conference, Posthuman Frontiers: Data, Designers, and Cognitive Machines, was paired with the Posthuman Frontiers exhibition, which featured jury-selected projects submitted to the conference as well as advanced work by Taubman College faculty. The events of the conference were held at multiple venues around Ann Arbor and were preceded by several workshops that made use of Taubman College’s digital fabrication and instruction facilities.

For those of us on the outside looking in (in our lesser moments, perhaps), the ACADIA community might easily be misconstrued as a group of architects obsessed with robots, or possessing an interest in complicated shapes made in Grasshopper for their own sake. However, the three days this author spent among their ranks at this year’s conference were some of the most inspiring in recent memory. Yes, there were moments of geometric fetishism, and yes, there were a considerable number of time-lapse videos of robot arms in progress. But when taken in aggregate, these projects, papers, and talks reframed and made vibrant the essential ingredients of what we work on as architects: the arrangement of solid and void, the cultural effects of form, and the possibilities of what we might craft in the built environment.

It must be said that the range of work presented was dramatic. Even the more immediately applicable papers and projects included sober arguments for parametric design in space planning, a smart device for lowering cooling costs in office spaces, newly designed plugins that optimize the unfolding of 3-D meshes, and progress in training robots to lay tile in order to relieve the strain on human bodies.

Reaching into more radical territory, we saw prototyped near-body architectures operating on the politics of the posthuman in Behnaz Farahi’s “Caress of the Gaze,” an actuated garment which tracks—and responds to—the eye movement of those regarding the wearer. We saw installations that build intimacy and a sense of cooperative play between humans and digital entities. There was work which adopted uncommon material alliances of “programmable matter,” such as in Jane Scott’s intertwining of hydrophobic fibers that writhe and retract when exposed to water vapor (one of several fabric-oriented works), and too many others of note to mention them all.

But some of the most memorable moments from this conference were the keynote addresses, as they punctuated the proceedings with disparate tones and positions that illuminated the diversity of this community. Theodore Spyropoulos led the charge on Thursday with a talk entitled All Is Behavior (a play on Hans Hollein’s claim that “All are architects. Everything is architecture.”) It quickly became clear that Spyropoulos sees the future of cities, and indeed, that of humanity, in a technologically positivist light. He envisions self-organizing and aggregating structures which allow for adaptivity in the face of changing climatic or social conditions, and seeks to bring us into more sympathetic forms of interaction with robotic and digital entities.

The evening of the same day found the participants exposed to other visionary work, in a dreamy—and at times titillating—conversation between Philip Beesley and Iris Van Herpen, whose ongoing collaborations are advancing both Van Herpen’s work at the forefront of couture, and Beesley’s at, perhaps, the architectural equivalent. Lucidly expressive, Beesley’s tone was one of wonderment—of proposed, barely imaginable relationships between humans and matter. In fact, Beesley’s role is most easily understood, and his work is most easily appreciated, when it is placed in the context of couture, the goal of which is to push the bounds of what is possible in clothing.

Mario Carpo’s discussion of the cultural implications of searchability was a thoughtful meditation and provocation that ultimately concluded the conference Saturday evening, but the real climax of ACADIA 2016 was a keynote lecture Friday evening by Elizabeth Diller, as she was presented with the Lifetime Achievement Award. Despite a playful hesitance to engage with the foreboding finality of “Lifetime Achievement,” Diller generously outlined some of the more seminal works of Diller Scofidio + Renfro (DS+R), one of the most influential practices in the world over the past 25 years. Early in the talk, Diller emphasized her interest in the fields adjacent to architecture, a propensity for smaller scale works, and a persistent fascination with “the encounter.” By the end, however, she was in a mode of pure architectural shoptalk, sharing in-progress photos of the recently manufactured steel struts and enormous wheels that will comprise The Shed, currently under construction in New York’s Hudson Yards development. Diller concluded her remarks with some reflections upon the way culture has shifted since some of DS+R’s early work. In the present day, she claims:

“...the speed of obsolescence makes technology a liability. Dumber is better than smarter and the best thing to do for culture in the future is to secure real estate. It’s as basic as that.

Then: Systems theory, game theory, cybernetic control systems were tools to democratize culture.

Now: Digital technologies allow culture to be open source, dispersed, and on-demand. However, with democracy comes the ubiquitous condition of being monitored, so it’s a different time.…

Then: Kit of parts and kinetic systems produce flexibility.

Now: Flexibility is a paradox. The more flexibility is built in, the more predetermined, leaving nothing but empty space (this is related to ‘dumb is a virtue’).

Then: Disciplinary borders had to be broken.

Now: Despite academia’s parsing and classification, the richly indeterminate contours of interdisciplinarity, intradisciplinarity, multidisciplinarity, transdisciplinarity, cross-disciplinarity—we actually have to push to make these things happen, because somehow, the real world divides everything up again. Because that’s where money comes from—different places. And it’s going to take a long time to change the system.

Then: Government support for culture was assumed.

Now: To avoid the vicissitudes of the economy, the cultural institutions must produce their own financial security.

Then: The architect was a generalist that gathers research from subcommittees.

Now: Professionalization turns the architect into a director/producer that relies on a rolling cadre of subconsultants who bring an ever-widening depth of expertise to ever-more adventurous problems. So, then and now, the architect gets to push the agency of the profession to invent a cultural and civic project on both scores.”

These sage thoughts carried the conference into its final day, which held perhaps the most poignant moment of the proceedings, as Chuck Eastman, one of the original founders of ACADIA in 1981, received the Society Award of Excellence. Hearing Eastman describe the early days of computational design, the work that went into tasks as simple as Boolean operations, put the tools we now take for granted in perspective. It is amazing how far computational design has advanced in just a few decades, and this community shows no sign of slowing. No doubt, the Massachusetts Institute of Technology’s Media Lab will rise to the occasion and show us the next chapter a year from now, as they are slated to host ACADIA 2017.

Profile> John D. Cerone & Hashim Sulieman of SHoP Construction

On February 17, John D. Cerone and Hashim Sulieman of SHoP Construction will lead Computational Design & 4D Sequencing, a workshop focusing on parametric modeling, as part of DAY 2 of COLLABORATION, a conference on facades and fabrication sponsored by The Architect's Newspaper.

John is a Virtual Design & Construction Coordinator and a member of the Advanced Technology Group at SHoP Construction. Specializing in Building Information Modeling (BIM), he has helped SHoP develop its technology and process, and he has served as an Adjunct Professor at the Parsons New School for Design, teaching BIM and digital representation. John received his Bachelor of Arts degree from the School of Architecture at Miami University in Oxford, Ohio (2002), and his Master of Architecture degree from the Graduate School of Architecture, Planning and Preservation at Columbia University (2008).

Hashim is a Virtual Design and Construction Manager at SHoP Construction and a member of the Advanced Technology Group. His work at SHoP has focused on the implementation of parametric models, BIM, and direct-to-fabrication technology. Hashim has also worked at SOM as a Digital Design Specialist and served as an Adjunct Assistant Professor for the C-BIP project at Columbia University’s Graduate School of Architecture, Planning, and Preservation.

SHoP Construction is behind the under-construction Barclays Center at the Atlantic Yards development site in Brooklyn. The stadium is clad in an undulating steel-and-glass enclosure made up of 12,000 unique steel latticework panels; to facilitate installation, the firm developed a 4D construction sequencing model of the structure and facade that allows the project team to make informed decisions in real time as the panels are installed.

The first session of their COLLABORATION workshop will focus on parametric modeling that allows design variability and tests the limits of form; the second session will be a step-by-step guide to 4D construction sequence modeling. Software used will include CATIA/Digital Project, Rhinoceros, Navisworks® Manage, Microsoft Project, and Microsoft Excel. Register here.
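
For readers new to the term, “4D sequencing” simply means that every element in the 3-D model also carries schedule data, so the team can query the planned and actual state of the facade at any date. The Python sketch below illustrates the idea; the data model, panel counts, and dates are invented for illustration and do not describe SHoP’s actual system.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Panel:
    panel_id: str
    zone: str
    planned_install: date
    actual_install: date | None = None


# A tiny stand-in for the ~12,000-panel facade model described in the article.
start = date(2012, 3, 1)  # arbitrary example start date
panels = [
    Panel(panel_id=f"P-{i:05d}", zone=f"Z{i % 4}", planned_install=start + timedelta(days=i))
    for i in range(40)
]


def status_on(panels: list, as_of: date) -> dict:
    """Count panels planned versus actually installed as of a given date."""
    planned = sum(1 for p in panels if p.planned_install <= as_of)
    installed = sum(1 for p in panels if p.actual_install and p.actual_install <= as_of)
    return {"planned": planned, "installed": installed, "behind": planned - installed}


# Simulate field reports: the first 25 panels went in, a few of them two days late.
for i, p in enumerate(panels[:25]):
    p.actual_install = p.planned_install + timedelta(days=2 if i % 7 == 0 else 0)

print(status_on(panels, as_of=start + timedelta(days=30)))
```

Linking each panel’s identity to its schedule is what lets a project team see, in one query, whether installation is keeping pace with the plan and where it is falling behind.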