Posts tagged with "Buildings Embedded With Sensors":

Upali Nanda uses neuroscience to understand buildings as living organisms

Dr. Upali Nanda is reimagining the role of the architect. Where design today is often top-down and architects move on to new projects before the doors of a building open, Nanda believes the role of architecture is to create living systems that respond to inhabitants’ changing needs, and that architects have to stay involved during occupancy to have true agency. Nanda heads up cutting-edge projects as principal and director of research at HKS Architects, serves as executive director of the nonprofit Center for Advanced Design Research and Education, and teaches as an associate professor of practice at the Taubman College of Architecture and Urban Planning at the University of Michigan. Perhaps most tellingly, she serves on the board of directors of the Academy of Neuroscience for Architecture. Working to create measurable outcomes in buildings that respond adaptably to human cognition and perception, Nanda will discuss the intersection of data, neuroscience, and IRL space at the upcoming TECH+ conference in Los Angeles.

AN: In many ways, you’re an outlier in architecture as a researcher who really thinks about human perception and cognition. How did you wind up approaching architecture this way?

Nanda: My entire career pathway can be summed up as someone who links design to outcomes, trying to understand the difference we make through design and how human perception and cognition play into it. My doctoral work was on how the senses interact with each other to craft our perceived experience. By diving deep into neuroscience, I stumbled upon insights that could give me agency as an architect. It allowed me to ask how form could impact emotions, how art could impact health, or how views could impact achievement. It also changed my thinking toward architecture not as a building, but as an immersive interface between humans and the environment.

We create interfaces, and that perhaps gives us a certain overlap with the tech industry. The difference is that humans are immersed within the interfaces we create. We constantly work at different scales, from macro to micro and back again. Unfortunately, all too often with these immersive interfaces, we never truly engage with what they mean to the people who live in them, because our engagement with the project has ended long before people move in.

How does this impact how you see the lifespan of the design process and the architect’s role in it?

A project doesn't end when the doors open. That's when it begins. So a project really starts living when the doors open, when people are in it. Until that point, all of your design is a hypothesis. And until we test that hypothesis, not just in simulations but in lived reality, we have no proof that it works. What's fascinating is that we cannot test this performance once and think it's enough. We are living organisms. And what we create, once occupied, is a living organism too. Architects evolve, the building evolves, and occupants evolve. Not seeing architecture as an evolving, living organism has always been problematic.

How has that shifted with the increase in sensing technology and the ability to build measurement in?

We live in times where there is so much nuanced data available that we never had access to before. We can now measure environmental quality, energy performance, and space utilization on one end, and human physiology and brain behavior on the other. Our understanding of the human brain, the human body, and the building has become more sophisticated. We can now think of the brain and the building in new ways. The human brain responds to the building, for sure, but there is now also a brain in the building itself that can respond back to human needs. In an era of sensing systems and digital twins, the building has a brain of its own. It is well on its way to becoming an intelligent organism that is climate responsive, but can also be community responsive. We are at an inflection point where the human brain and the building brain can be in conversation with each other. They can talk to each other. They can respond to each other. That’s an entire paradigm shift, because it changes how we look at architecture: it makes us stop looking at architecture as a static object, or as an object at all, and instead see a living, breathing organism that interacts actively with the people it’s for. And this means investing in people science as rigorously as we invest in building science.

So what does that mean for someone like you? What does that mean for how architects can begin to think about approaching architecture in this way?

For someone like me, and for researchers in practice all over, this means finding ways of linking design to outcomes during occupancy. One of my colleagues says it best: “There is nothing post about occupancy.” That is when our work is tested and really comes to life. Occupancy is what marks the transition of a building from giant sculpture to architecture. Our work should always be judged not pre- or post-, but during occupancy. The changes I see in our profession are making us focus on occupancy, be accountable to occupancy outcomes, and be able to link the performance of the building to the performance of the people within it, within the context of the climate and community that we serve. When we link design to outcomes, we link prediction to proof. We’ve done prediction; not enough, and not always well, but we are getting there. A lot of the modeling we do, a lot of the software we talk about, is all about prediction. Unfortunately, we’ve reached a point as a society where we keep predicting what is going to happen. At some point, we’re going to have to stop and ask ourselves, "Where’s the proof?" We have to be honest when things didn’t happen the way we thought they would.
If design is about predictions, then we have to have some accountability to the proof.

What’s an example of how you’ve put this way of thinking to work?

As someone who bridges practice and academia, I believe we have to practice what we preach, or die trying. Students at the university are coming out with not just an incredible skill set, but also a level of citizenship we have not seen before, so the courses I teach are set up to link students to professional mentors and professionals to state-of-the-art academic thinking.

At our firm, one thing we did was set up our own offices as living labs. We had discussions in-house where we said, “Why don't we just test on ourselves before we do any renovation?” So we did. We were about to refresh a few offices, so we set them up as living labs, which meant that before we did anything, we did a lot of design diagnostics. We measured energy use, environmental quality, and spatial quality, as well as the experience of our employees. We used old-school methods like interviews and surveys, as well as digital tools using sensors and spatial analytics. We had data on where everybody sits, how they move, what activities they do at certain points in time, and their personality types. Because these were our own offices, we had access to all of this data not just from before and after people moved into the space, but also every morning and evening for a period of time. That was important for us, because we acknowledged that human experience is not defined at a single point in time.

Our findings from the Chicago living lab showed an improvement in sleep quality and focus. We saw an improvement in overall reports of well-being. We also saw an improvement in air quality and some of the other environmental measures. We were able to say that when the environment becomes better, the human response to the environment also becomes better.

One of the things that has been a challenge in our profession is that we have been many, many levels removed from final occupancy. We need to blur those boundaries. We need to be able to speak directly to the occupant. We need to understand that we work for them, that whatever we do is in service of that eventual stakeholder. Investment in research is investment in a "think, make, test" philosophy: getting to a point where every time we want to try a new design strategy, we test it. We have to understand that we are not only doing studies that are pre- or post-occupancy, but setting the stage for a living, breathing, learning ecosystem. We learn from our mistakes because we are living with them. And as we evolve, the system evolves with us.

For more on the latest in AEC technology and for information about the upcoming TECH+ conference, visit https://techplusexpo.com/events/la/
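Nanda does not describe HKS's actual tooling, but the kind of before-and-after comparison she outlines, environmental readings paired with self-reported measures and sampled repeatedly rather than at a single point in time, can be sketched in a few lines of Python. Every field name and number below is hypothetical and purely illustrative.

```python
from statistics import mean

# Hypothetical daily readings from a living-lab office, one dict per day.
# Fields mix environmental sensing (CO2) with self-reported survey scores.
baseline = [
    {"co2_ppm": 950, "sleep_quality": 3.1, "focus": 3.4},
    {"co2_ppm": 980, "sleep_quality": 3.0, "focus": 3.2},
]
post_move = [
    {"co2_ppm": 720, "sleep_quality": 3.8, "focus": 3.9},
    {"co2_ppm": 700, "sleep_quality": 3.7, "focus": 4.0},
]

def summarize(days):
    """Average each metric across repeated samples, since experience
    is not defined at a single point in time."""
    return {key: mean(day[key] for day in days) for key in days[0]}

before, after = summarize(baseline), summarize(post_move)
for key in before:
    print(f"{key}: {before[key]:.2f} -> {after[key]:.2f}")
```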

Can you capture a portrait of a city with a taxi-mounted sensor?

Big data and its purported utility come with the attendant need to actually collect that data in the first place, meaning an increase in sensing devices attached to all manner of things. That includes treating many everyday things, from skyscrapers to human beings, as sensors themselves. When it comes to the urban environment, data on air quality, weather, traffic, and other metrics is becoming more important than ever. In general, however, the sensors that collect this data are fixed, attached to buildings or installed in other stable spots. “They’re good in time, but not in space,” said Kevin O’Keeffe, a postdoc in MIT’s Senseable City Lab, in a release from the university. Airborne sensors such as drones, O’Keeffe explained, work well in space, but not in time. To collect data that more accurately reflects an entire city, in both space and time, mobile sensors would be needed at street level.

Cities already have fleets of mobile devices close to the ground: vehicles. While private cars operate only sporadically, and buses run fixed routes, taxis spend all day and night traversing large swaths of cities. (Incidentally, the lab also tried using garbage trucks, but they did not collect as much data as the researchers predicted cabs could.) Inexpensive sensors could be attached to taxis to provide researchers and others with important data on the on-the-ground status of urban environments. To test the viability of this concept, however, the Senseable City Lab first had to find out just how much ground taxis actually cover, and then how many sensor-enabled taxis it would take to create an accurate picture of a city's terrain and air quality.

By analyzing taxi routes in New York City, the researchers discovered that a relatively small number of cars covered a surprisingly wide territory: it took just ten cabs to cover a third of Manhattan in a single day. And while one might expect this to be particular to Manhattan and its orderly grid, the researchers found that similar patterns occur in cities across the world, from Vienna to Singapore. That said, because many taxis travel to similar areas and along high-traffic routes, the number of cabs it takes to cover even greater ground grows quite rapidly: in the case of Manhattan, it takes 30 taxis to cover half the island and over 1,000 to reach 85 percent. To realize the full potential of car-borne sensors, which can reach popular areas as well as underserved ones, O’Keeffe suggested a hybrid approach: placing sensors on taxis along with a few dedicated vehicles, à la Google Street View.

The research team hopes this data will help planners and politicians form a more realistic idea of how mobile sensing could work in their city, its cost, and its potential impacts. The data should also help cities tailor any mobile sensing projects to their particular needs, and the Senseable City Lab believes these mobile sensors would be less expensive than traditional options. The research appears in a paper recently published in the Proceedings of the National Academy of Sciences, co-authored by O’Keeffe; Amin Anjomshoaa, a researcher at the Senseable City Lab; Steven Strogatz, a professor of mathematics at Cornell University; Paolo Santi, a research scientist at the Senseable City Lab and the Institute of Informatics and Telematics of CNR in Pisa, Italy; and Carlo Ratti, director of the Senseable City Lab and professor of the practice in MIT’s Department of Urban Studies and Planning.
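The paper's analysis works on real GPS traces and a street graph; purely as an illustration of the diminishing-returns pattern described above, here is a toy Python sketch in which trips are made-up lists of street-segment IDs (all names and numbers are invented, not the researchers' data or code). Because a handful of popular segments appear in most trips, coverage climbs quickly with the first few taxis and then flattens out, much as the Manhattan figures suggest.

```python
import random

def coverage_fraction(trips, total_segments, n_taxis):
    """Estimate the fraction of street segments touched when n_taxis
    day-long traces are sampled at random from a pool of trips."""
    sampled = random.sample(trips, min(n_taxis, len(trips)))
    covered = set()
    for trip in sampled:
        covered.update(trip)  # each trip is a list of street-segment IDs
    return len(covered) / total_segments

# Toy street network: 2,000 segments. A small set of popular segments
# shows up in most trips, so early taxis add coverage quickly.
random.seed(0)
segments = list(range(2000))
popular = segments[:200]
trips = [random.sample(popular, 30) + random.sample(segments, 20)
         for _ in range(1500)]

for n in (10, 30, 100, 1000):
    print(n, "taxis ->", round(coverage_fraction(trips, len(segments), n), 2))
```

Running the sketch prints coverage fractions that rise steeply for the first few taxis and then level off, the same qualitative behavior the researchers report.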
Curious urbanites can check out the accompanying maps and infographics on the lab’s website.

UNStudio launches a tech startup to "revolutionize" architectural technology

Self-described “open-source architecture studio” UNStudio is spinning off the tech startup UNSense, which will focus on collecting data from buildings to ultimately improve how people occupy them. UNStudio co-founder and Dutch architect Ben van Berkel has called the move integral to incorporating technology into architecture, and the first step in future-proofing potential new projects. UNStudio is no stranger to futuristic concepts or designs, having built an undulating train station for Arnhem in the Netherlands, proposed a revolutionary new urban-rural farming system, and designed a Zaha-esque cable car system for Gothenburg, Sweden.

Although UNSense will be run as a separate sister company, the two will split duties: UNStudio will develop “‘hardware’ for the built environment, while UNSense, by contrast, uses very different expertise to develop ‘software’ based applications.” Citing an aim to improve the environment and health of cities through more efficient design, UNSense will focus on using sensors to cut waste, create seamless interaction between occupants and building systems, and track air quality. UNSense will be located in its own office at the Freedom Lab Campus tech hub in Amsterdam. While the company was only just formed, it has hit the ground running with the launch of a power-generating “solar brick” and recommendations for planning sustainable, people-focused cities that learn from the data they collect.

“I see a great opportunity as an architect to create buildings and cities that are sensible and sensitive to human beings,” said van Berkel in a statement. “The digital revolution is driving change in every part of our lives, except within the built environment. Now it’s time to catch up with technology.”

Google recruits Carnegie Mellon University to create a "living lab" of smart city technologies

Google has awarded an endowment worth half a million dollars to Carnegie Mellon University (CMU) to build a “living lab” for the search engine giant’s Open Web of Things (OWT) expedition. OWT envisions a world in which access to networked technology is mediated through internet-connected buildings and everyday objects, rather than only through the screen of a smartphone or computer.
In a statement, Google’s Open Web of Things describes that vision as “a future where we work seamlessly with connected systems, services, devices, and ‘things’ to support work practices, education and daily interactions.”
Carnegie Mellon’s enviable task is to become a testing ground for the cheap, ubiquitous sensors, integrated apps, and user-developed tools that Google sees as the key to an integrated machine future. If that sounds like mystical marketing copy, a recent project by CMU’s Human-Computer Interaction Institute (HCII) sheds light on what a sensor-saturated “smart” city is capable of. The team, headed by Anind K. Dey, has created apps like Snap2It, which lets users connect to printers and other shared resources by taking photos of the device. Another application, Impromptu, offers relevant, temporary shared apps; for instance, if a sensor detects that you are waiting at a bus stop, you’ll likely be referred to a scheduling app.

“The goal of our project will be nothing less than to radically enhance human-to-human and human-to-computer interaction through a large-scale deployment of the Internet of Things (IoT) that ensures privacy, accommodates new features over time, and enables people to readily design applications for their own use,” said Dey, lead investigator of the expedition and director of HCII.

To create the living lab, the expedition will saturate the CMU campus with sensors and infrastructure and recruit students and other campus members to create and use novel IoT apps. Dey plans on building tools that allow users to easily create their own IoT scripts. “An early milestone will include the development of our IoT app store, where any campus member and the larger research community will be able to develop and share an IoT script, action, multiple-sensor feed, or application easily and widely,” Dey said. “Because many novel IoT applications require a critical mass of sensors, CMU will use inexpensive sensors to add IoT capability to ‘dumb’ appliances and environments across the campus.”

Researchers at CMU will work with Cornell, Stanford, and the University of Illinois at Urbana-Champaign to develop the project, code-named GIoTTo. The premise is that embedded sensors in buildings and everyday objects can be interwoven to create “smart” environments controlled and experienced through interoperable technologies.
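The article does not show what a GIoTTo “IoT script” would actually look like, but the Impromptu example (a presence sensor fires at a bus stop, so a transit app is suggested) reads like a simple trigger-action rule. As a purely hypothetical sketch, with every name and event below invented rather than drawn from CMU's system, such a script might amount to little more than this:

```python
# Hypothetical trigger-action "IoT script" in the spirit of the Impromptu
# example: when a presence sensor fires at a bus stop, suggest a transit app.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Script:
    name: str
    trigger: Callable[[dict], bool]   # inspects an incoming sensor event
    action: Callable[[dict], None]    # runs when the trigger matches

def suggest_transit_app(event: dict) -> None:
    print(f"Suggesting bus-schedule app to user at {event['location']}")

bus_stop_script = Script(
    name="bus-stop-helper",
    trigger=lambda e: e["sensor"] == "presence" and e["location"].startswith("bus_stop"),
    action=suggest_transit_app,
)

# A minimal event loop: feed incoming sensor events through the script.
events = [
    {"sensor": "presence", "location": "bus_stop_42"},
    {"sensor": "presence", "location": "library"},
]
for event in events:
    if bus_stop_script.trigger(event):
        bus_stop_script.action(event)
```

In practice, GIoTTo's app store and scripting tools would presumably also handle discovery, sharing, and the privacy guarantees Dey mentions, none of which this toy captures.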