Advanced Functional Fabrics of America opens headquarters steps from MIT campus

These are not your grandmother’s fibers and textiles. These are tomorrow’s functional fabrics — designed and prototyped in Cambridge, Massachusetts, and manufactured across a network of U.S. partners. This is the vision of the new headquarters for the Manufacturing USA institute called Advanced Functional Fabrics of America (AFFOA) that opened Monday at 12 Emily Street, steps away from the MIT campus.

AFFOA headquarters represents a significant MIT investment in advanced manufacturing innovation. The facility includes a Fabric Discovery Center that provides end-to-end prototyping, from fiber design to system integration of new textile-based products, and will be used for education and workforce development in Cambridge and the greater Boston community. The headquarters also includes startup incubation space for companies spun out of MIT and other partners that are developing advanced fabrics and fibers for applications ranging from apparel and consumer electronics to automotive and medical devices.

MIT was a founding member of the AFFOA team that partnered with the Department of Defense in April 2016 to launch the new institute as a public-private partnership, organized through an independent nonprofit also founded by MIT. AFFOA’s chief executive officer is Yoel Fink. Prior to his current role, Fink, a professor of materials science and engineering and director of the Research Laboratory of Electronics at MIT, led the AFFOA proposal last year with a vision to create a “fabric revolution.” Under his leadership, that revolution is grounded in new fiber materials and textile manufacturing processes for fabrics that see, hear, sense, communicate, store and convert energy, and monitor health.

From the perspectives of research, education, and entrepreneurship, MIT engagement in AFFOA draws on many strengths. These include the multifunctional drawn fibers developed by Fink and others, which combine multiple materials within a single fiber so that the fiber itself functions as an electronic device. That fiber concept, developed at MIT, has been applied to key challenges in the defense sector through MIT’s Institute for Soldier Nanotechnologies, commercialized through a startup called OmniGuide (now OmniGuide Surgical) that makes laser surgery devices, and extended to several new areas, including neural probes developed by Polina Anikeeva, MIT associate professor of materials science and engineering. Beyond these diverse uses of fiber devices, MIT faculty including Greg Rutledge, the Lammot du Pont Professor of Chemical Engineering, have also led innovation in predictive modeling and design of polymer nanofibers, fiber processing and characterization, and self-assembly of woven and nonwoven filters and textiles for diverse applications and industries.

Rutledge coordinates MIT campus engagement in the AFFOA Institute, and notes that “MIT has a range of research and teaching talent that impacts manufacturing of fiber and textile-based products, from designing the fiber to leading the factories of the future. Many of our faculty also have longstanding collaborations with partners in defense and industry on these projects, including with Lincoln Laboratory and the Army’s Natick Soldier Research, Development and Engineering Center, so MIT membership in AFFOA is an opportunity to strengthen and grow those networks.”

The design of industrial processes for drug manufacturing

When organic chemists identify a useful chemical compound — a new drug, for instance — it’s up to chemical engineers to determine how to mass-produce it.

There could be 100 different sequences of reactions that yield the same end product. But some of them use cheaper reagents and lower temperatures than others, and perhaps most importantly, some are much easier to run continuously, with technicians occasionally topping up reagents in different reaction chambers.

Historically, determining the most efficient and cost-effective way to produce a given molecule has been as much art as science. But MIT researchers are trying to put this process on a more secure empirical footing, with a computer system that’s trained on thousands of examples of experimental reactions and that learns to predict what a reaction’s major products will be.

The researchers’ work appears in the American Chemical Society’s journal Central Science. Like all machine-learning systems, theirs presents its results in terms of probabilities. In tests, the system was able to predict a reaction’s major product 72 percent of the time; 87 percent of the time, it ranked the major product among its three most likely results.
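As a rough illustration of how such probabilistic predictions are scored, the sketch below computes top-1 and top-3 accuracy for a model that ranks candidate products by predicted probability. This is not the researchers’ code; the data structures and the toy SMILES strings are placeholders invented for the example.

```python
# Illustrative sketch (not the researchers' code): top-k accuracy for a model
# that ranks candidate reaction products by predicted probability.

def top_k_accuracy(predictions, true_products, k):
    """predictions: one list of (candidate, probability) pairs per reaction;
    true_products: the observed major product for each reaction."""
    hits = 0
    for ranked, truth in zip(predictions, true_products):
        # Sort candidates by predicted probability, highest first.
        top_k = [c for c, _ in sorted(ranked, key=lambda x: x[1], reverse=True)[:k]]
        if truth in top_k:
            hits += 1
    return hits / len(true_products)

# Toy example; the SMILES strings are placeholders, not real predictions.
preds = [
    [("CCO", 0.70), ("CC=O", 0.20), ("CCC", 0.10)],
    [("c1ccccc1Br", 0.50), ("c1ccccc1Cl", 0.40), ("c1ccccc1", 0.10)],
]
truth = ["CCO", "c1ccccc1Cl"]
print(top_k_accuracy(preds, truth, k=1))  # 0.5: major product ranked first once
print(top_k_accuracy(preds, truth, k=3))  # 1.0: major product always in top three
```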

“There’s clearly a lot understood about reactions today,” says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT and one of four senior authors on the paper, “but it’s a highly evolved, acquired skill to look at a molecule and decide how you’re going to synthesize it from starting materials.”

With the new work, Jensen says, “the vision is that you’ll be able to walk up to a system and say, ‘I want to make this molecule.’ The software will tell you the route you should make it from, and the machine will make it.”

With a 72 percent chance of identifying a reaction’s chief product, the system is not yet ready to anchor the type of completely automated chemical synthesis that Jensen envisions. But it could help chemical engineers more quickly converge on the best sequence of reactions — and possibly suggest sequences that they might not otherwise have investigated.

Jensen is joined on the paper by first author Connor Coley, a graduate student in chemical engineering; William Green, the Hoyt C. Hottel Professor of Chemical Engineering, who, with Jensen, co-advises Coley; Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science.

A new chip for the coming superstorm of data

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between the two chips creates a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place computing and memory side by side, even as transistors have continued to shrink (the trend described by Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the rate they maintained for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, yielding the most complex nanoelectronic system ever made with emerging nanotechnologies.
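For readers unfamiliar with resistive memory, the toy model below illustrates the basic idea: a bit is stored as a high- or low-resistance state and read back by sensing current at a fixed voltage. It is a deliberately simplified sketch, not the device physics of the chip described here, and the resistance values are assumed purely for illustration.

```python
# Toy illustration of an RRAM cell (not the actual device physics): a bit is
# stored as a high- or low-resistance state of a dielectric, and reading
# measures the current at a fixed voltage. Values below are assumed.

class ToyRRAMCell:
    R_LOW = 1e4    # ohms, "set" state, logic 1 (assumed value)
    R_HIGH = 1e6   # ohms, "reset" state, logic 0 (assumed value)

    def __init__(self):
        self.resistance = self.R_HIGH  # nonvolatile: persists without power

    def write(self, bit):
        # A voltage pulse switches the dielectric between resistance states.
        self.resistance = self.R_LOW if bit else self.R_HIGH

    def read(self, v_read=0.2):
        # Sense the current; a midpoint threshold separates the two states.
        threshold = v_read / ((self.R_LOW + self.R_HIGH) / 2)
        return 1 if v_read / self.resistance > threshold else 0

cell = ToyRRAMCell()
cell.write(1)
print(cell.read())  # 1
```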

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

Modeling power consumption could help make the systems portable

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.
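The sketch below conveys the general flavor of such an energy-driven approach: model each layer’s energy as compute operations plus data movement, then prune the most energy-hungry layers hardest. It is a greatly simplified illustration, not the researchers’ method, and every coefficient and layer shape in it is invented.

```python
# Greatly simplified sketch of an energy-aware pruning strategy (not the
# researchers' exact method). Per-layer energy is modeled as compute energy
# (multiply-accumulates) plus data-movement energy (memory accesses), and the
# most energy-hungry layers receive the highest pruning ratios. All
# coefficients and layer shapes below are invented for illustration.

E_MAC = 1.0      # relative energy per multiply-accumulate (assumed)
E_ACCESS = 6.0   # relative energy per memory access (assumed)

layers = [
    # (name, multiply-accumulates, weight/activation accesses)
    ("conv1", 2.0e8, 1.0e6),
    ("conv2", 8.0e8, 4.0e6),
    ("fc1",   5.0e7, 2.5e7),  # fully connected: few MACs, many weight accesses
]

def layer_energy(macs, accesses):
    # Energy estimate in arbitrary units: compute plus data movement.
    return macs * E_MAC + accesses * E_ACCESS

energies = {name: layer_energy(macs, acc) for name, macs, acc in layers}

# Rank layers by estimated energy and prune the dominant ones more aggressively.
ranked = sorted(energies, key=energies.get, reverse=True)
prune_ratio = {name: 0.5 - 0.1 * i for i, name in enumerate(ranked)}

for name in ranked:
    print(f"{name}: energy ~{energies[name]:.2e}, prune {prune_ratio[name]:.0%}")
```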

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Getting miniature smart drones off the ground

In recent years, engineers have worked to shrink drone technology, building flying prototypes that are the size of a bumblebee and loaded with even tinier sensors and cameras. Thus far, they have managed to miniaturize almost every part of a drone, except for the brains of the entire operation — the computer chip.

Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone’s pitch, speed, and trajectory. To do so, these computers use between 10 and 30 watts of power, supplied by batteries that would weigh down a much smaller, bee-sized drone.

Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They will present a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held this week at MIT.

The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip.

The key contribution of their work is a new approach for designing the chip hardware and the algorithms that run on the chip. “Traditionally, an algorithm is designed, and you throw it over to a hardware person to figure out how to map the algorithm to hardware,” Sze says. “But we found by designing the hardware and algorithms together, we can achieve more substantial power savings.”

“We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down,” Karaman says.

The new chip processes streaming images at 20 frames per second and automatically carries out commands to adjust a drone’s orientation in space. The streamlined chip performs all these computations while using just below 2 watts of power — making it an order of magnitude more efficient than current drone-embedded chips.
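A back-of-the-envelope calculation with the figures quoted above makes the comparison concrete; the assumption that a standard chip handles the same 20 frames per second is ours, for illustration only.

```python
# Back-of-the-envelope comparison using the figures quoted above. Assuming a
# standard chip processes the same 20 frames per second is a simplification
# for illustration, not a number from the article.

navion_power_w = 2.0           # just under 2 W
frames_per_second = 20
standard_power_w = (10, 30)    # range cited for standard drone computers

print(f"Navion: ~{navion_power_w / frames_per_second:.2f} J per frame")  # ~0.10 J
for p in standard_power_w:
    print(f"{p} W chip: ~{p / frames_per_second:.2f} J per frame")       # 0.50-1.50 J
```

At roughly 0.1 joule per processed frame versus 0.5 to 1.5 joules, the gap lands in the order-of-magnitude range described above.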

Karaman says the team’s design is the first step toward engineering “the smallest intelligent drone that can fly on its own.” He ultimately envisions disaster-response and search-and-rescue missions in which insect-sized drones flit in and out of tight spaces to examine a collapsed structure or look for trapped individuals. Karaman also foresees novel uses in consumer electronics.