Category Archives: Internet

Making a ubiquitous model of decision processes more accurate

Markov decision processes are mathematical models used to determine the best courses of action when both current circumstances and future consequences are uncertain. They’ve had a huge range of applications — in natural-resource management, manufacturing, operations management, robot control, finance, epidemiology, scientific-experiment design, and tennis strategy, just to name a few.

But analyses involving Markov decision processes (MDPs) usually make some simplifying assumptions. In an MDP, a given decision doesn’t always yield a predictable result; it could yield a range of possible results. And each of those results has a different “value,” meaning the chance that it will lead, ultimately, to a desirable outcome.

Characterizing the value of a given decision requires collecting empirical data, which can be prohibitively time-consuming, so analysts usually just make educated guesses. That means, however, that the MDP analysis doesn’t guarantee the best decision in all cases.

In the Proceedings of the Conference on Neural Information Processing Systems, published last month, researchers from MIT and Duke University took a step toward putting MDP analysis on more secure footing. They show that, by adopting a simple trick long known in statistics but little applied in machine learning, it’s possible to accurately characterize the value of a given decision while collecting much less empirical data than had previously seemed necessary.

In their paper, the researchers described a simple example in which the standard approach to characterizing probabilities would require the same decision to be performed almost 4 million times in order to yield a reliable value estimate.

With the researchers’ approach, it would need to be run 167,000 times. That’s still a big number — except, perhaps, in the context of a server farm processing millions of web clicks per second, where MDP analysis could help allocate computational resources. In other contexts, the work at least represents a big step in the right direction.
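The article does not reproduce the paper’s math, but one way to see where a gap like that can come from is to compare how the required number of samples is bounded. The sketch below is a rough, hypothetical calculation, not the paper’s actual analysis: it contrasts a classical range-based requirement, which grows with the square of the largest possible outcome, against a variance-based requirement of the kind that robust estimators from classical statistics (median-of-means is one example of such a trick) make possible. The distribution, accuracy target, and constants are all made up for illustration.

```python
import math

# Hypothetical decision outcome: zero reward most of the time, occasionally a
# payoff of 1,000, so the range of values is huge but the variance is modest.
r_max = 1000.0           # largest possible value
p_big = 0.001            # chance of the large outcome
variance = p_big * (1 - p_big) * r_max ** 2

eps, delta = 1.0, 0.05   # target accuracy and allowed failure probability

# Range-based (Hoeffding-style) sample requirement: grows with the squared range.
n_range = math.ceil(r_max ** 2 * math.log(2 / delta) / (2 * eps ** 2))

# Variance-based (median-of-means-style) requirement: grows with the variance.
# The leading constant differs between analyses; treat this as order-of-magnitude.
n_variance = math.ceil(32 * variance * math.log(1 / delta) / eps ** 2)

print(f"range-based bound:    {n_range:,} samples")
print(f"variance-based bound: {n_variance:,} samples")
```

With these made-up numbers the range-based bound asks for roughly 1.8 million samples while the variance-based bound asks for fewer than 100,000, a gap of the same flavor as the one described above, though the specific figures in the paper come from its own analysis.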

“People are not going to start using something that is so sample-intensive right now,” says Jason Pazis, a postdoc at the MIT Laboratory for Information and Decision Systems and first author on the new paper. “We’ve shown one way to bring the sample complexity down. And hopefully, it’s orthogonal to many other ways, so we can combine them.”

Unpredictable outcomes

In their paper, the researchers also report running simulations of a robot exploring its environment, in which their approach yielded consistently better results than the existing approach, even with more reasonable sample sizes — nine and 105. Pazis emphasizes, however, that the paper’s theoretical results bear only on the number of samples required to estimate values; they don’t prove anything about the relative performance of different algorithms at low sample sizes.

Preventing customer profiling and price gouging

Most website visits these days entail a database query — to look up airline flights, for example, or to find the fastest driving route between two addresses.

But online database queries can reveal a surprising amount of information about the people making them. And some travel sites have been known to jack up the prices on flights whose routes are drawing an unusually high volume of queries.

At the USENIX Symposium on Networked Systems Design and Implementation next week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Stanford University will present a new encryption system that disguises users’ database queries so that they reveal no private information.

The system is called Splinter because it splits a query up and distributes it across copies of the same database on multiple servers. The servers return results that make sense only when recombined according to a procedure that the user alone knows. As long as at least one of the servers can be trusted, it’s impossible for anyone other than the user to determine what query the servers executed.
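Splinter’s actual cryptography is more sophisticated than anything shown here, so the following is only a toy illustration of the split-and-recombine idea, written as a hypothetical Python sketch: a classic two-server trick in which each server receives a random-looking set of row indices, XORs the requested rows together, and only the user, who knows how the query was split, can recombine the two replies into the record actually wanted.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_queries(num_rows, wanted_index):
    """Split one lookup into two random-looking index sets.
    Each set on its own is a uniformly random subset of rows, so a single
    server learns nothing about wanted_index."""
    set_a = {i for i in range(num_rows) if secrets.randbits(1)}
    set_b = set_a ^ {wanted_index}      # symmetric difference flips only the wanted row
    return set_a, set_b

def answer_query(database, index_set):
    """Each server XORs together the rows it was asked for."""
    result = bytes(len(database[0]))
    for i in index_set:
        result = xor_bytes(result, database[i])
    return result

# Two servers hold identical copies of the database (fixed-width rows).
database = [f"row-{i:02d}".encode().ljust(8, b"\0") for i in range(8)]

wanted = 5
q_a, q_b = make_queries(len(database), wanted)
reply_a = answer_query(database, q_a)   # sent to server A
reply_b = answer_query(database, q_b)   # sent to server B

# Only the user, who knows how the query was split, can recombine the replies.
print(xor_bytes(reply_a, reply_b))      # b'row-05\x00\x00'
```

Because either index set on its own is a uniformly random subset, a single honest server is enough to keep the lookup private, which mirrors the trust assumption described above.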

“The canonical example behind this line of work was public patent databases,” says Frank Wang, an MIT graduate student in electrical engineering and computer science and first author on the conference paper. “When people were searching for certain kinds of patents, they gave away the research they were working on. Stock prices is another example: A lot of the time, when you search for stock quotes, it gives away information about what stocks you’re going to buy. Another example is maps: When you’re searching for where you are and where you’re going to go, it reveals a wealth of information about you.”

Honest broker

Of course, if the site that hosts the database is itself collecting users’ data without their consent, the requirement of at least one trusted server is difficult to enforce.

Wang, however, points to the increasing popularity of services such as DuckDuckGo, a search engine that uses search results from other sites, such as Bing and Yahoo, but vows not to profile its customers.

“We see a shift toward people wanting private queries,” Wang says. “We can imagine a model in which other services scrape a travel site, and maybe they volunteer to host the information for you, or maybe you subscribe to them. Or maybe in the future, travel sites realize that these services are becoming more popular and they volunteer the data. But right now, we’re trusting that third-party sites have adequate protections, and with Splinter we try to make that more of a guarantee.”

Educating everyone in the world who wants to learn something

Raul Boquin, now an MIT senior, remembers the assignment from his freshman year as if it were yesterday. During a leadership workshop, he was asked to write a headline for a newspaper in his imagined future. The words that came to mind resonated so strongly that they now hang on the walls of his dorm room: “Equal opportunities in education for all.”

“I realized that I didn’t come to MIT because it was the best engineering school, but because it was the best place to discover what I was truly passionate about,” he says. “MIT pushed me to my limits and made me able to say ‘I don’t have to be the number one math person, or the number one computer science person, to make a difference’ with the passion I ended up having, which is education.”

Boquin, who is majoring in mathematics with computer science, predicts his life’s work will be to “find a way to adapt education to every person of the world who wants to learn something.”

More to education than teaching

Boquin’s first forays into education followed a relatively traditional path. As part of the undergraduate coursework he needed for his education concentration, he spent time observing teachers in local middle and high schools.

“But at the end of sophomore year, I realized that there was a lot more to education than just teaching,” he says.

The summer before his junior year, Boquin worked as a counselor and teaching assistant at Bridge to Enter Advanced Mathematics (BEAM). “It originally started as just a math camp for students in the summer, teaching them things like topology and number theory,” Boquin says. “These were seventh grade Hispanic and black children, and they loved it. And they were amazing at it.”

On a campus in upstate New York, Boquin taught classes by day and talked to students about his own work in mathematics by night. He also designed parts of the BEAM curriculum and came up with fun ways of teaching the lessons. “It was inspiring because it was like I wasn’t only a teacher, but I was a mentor and a friend,” he says.

Diagnosing health issues like cognitive decline and cardiac disease

We’ve long known that blood pressure, breathing, body temperature and pulse provide an important window into the complexities of human health. But a growing body of research suggests that another vital sign – how fast you walk – could be a better predictor of health issues like cognitive decline, falls, and even certain cardiac or pulmonary diseases.

Unfortunately, it’s hard to accurately monitor walking speed in a way that’s both continuous and unobtrusive. Professor Dina Katabi’s group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has been working on the problem, and believes that the answer is to go wireless.

In a new paper, the team presents “WiGait,” a device that can measure the walking speed of multiple people with 95 to 99 percent accuracy using wireless signals.

The size of a small painting, the device can be placed on the wall of a person’s house, and it emits roughly one-hundredth the radiation of a standard cellphone. It builds on Katabi’s previous work on WiTrack, which analyzes wireless signals reflected off people’s bodies to measure a range of behaviors, from breathing and falling to specific emotions.

“By using in-home sensors, we can see trends in how walking speed changes over longer periods of time,” says lead author and PhD student Chen-Yu Hsu. “This can provide insight into whether someone should adjust their health regimen, whether that’s doing physical therapy or altering their medications.”

WiGait is also 85 to 99 percent accurate at measuring a person’s stride length, which could allow researchers to better understand conditions like Parkinson’s disease that are characterized by reduced step size.

Hsu and Katabi developed WiGait with CSAIL PhD student Zachary Kabelac and master’s student Rumen Hristov, alongside undergraduate Yuchen Liu from the Hong Kong University of Science and Technology, and Assistant Professor Christine Liu from the Boston University School of Medicine. The team will present their paper in May at ACM’s CHI Conference on Human Factors in Computing Systems in Colorado.

How it works

Today, walking speed is measured by physical therapists or clinicians using a stopwatch. Wearables like Fitbit can only roughly estimate speed based on step count, and GPS-enabled smartphones are similarly inaccurate and can’t work indoors. Cameras are intrusive and can only monitor one room. VICON motion tracking is the only method that’s comparably accurate to WiGait, but it is not widely available enough to be practical for monitoring day-to-day health changes.

Meanwhile, WiGait measures walking speed with a high level of granularity, without requiring that the person wear or carry a sensor. It does so by analyzing the surrounding wireless signals and their reflections off a person’s body. The CSAIL team’s algorithms can also distinguish walking from other movements, such as cleaning the kitchen or brushing one’s teeth.
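The team’s actual signal-processing pipeline is not described in this article, so the sketch below is only a simplified, hypothetical illustration of the final step: turning a stream of estimated person-to-device distances (the kind of range estimates a WiTrack-style system can produce) into a walking-speed estimate by smoothing, differentiating, and keeping only sustained, walking-like speeds. The sample rate, thresholds, and input data are all assumptions.

```python
import numpy as np

def walking_speed(distances_m, sample_rate_hz, window_s=1.0,
                  min_speed=0.2, max_speed=2.5):
    """Estimate walking speed from a series of range (distance) estimates.

    distances_m: estimated distance from the device to the person, in meters,
    one sample every 1/sample_rate_hz seconds (hypothetical input from an
    RF ranging front end). Returns the median speed over samples that look
    like walking, or None if no walking-like motion is found.
    """
    distances_m = np.asarray(distances_m, dtype=float)

    # Smooth with a moving average to suppress measurement noise.
    win = max(1, int(window_s * sample_rate_hz))
    kernel = np.ones(win) / win
    smooth = np.convolve(distances_m, kernel, mode="valid")

    # Speed is the magnitude of the change in distance per unit time.
    speed = np.abs(np.diff(smooth)) * sample_rate_hz

    # Keep only plausible human walking speeds (filters out standing still
    # and non-walking motions such as reaching or gesturing).
    walking = speed[(speed > min_speed) & (speed < max_speed)]
    return float(np.median(walking)) if walking.size else None

# Example: someone walking away from the device at roughly 1.2 m/s for 10 s,
# sampled at 20 Hz with some measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 20)
ranges = 2.0 + 1.2 * t + rng.normal(0, 0.05, t.size)
print(f"estimated speed: {walking_speed(ranges, 20):.2f} m/s")
```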

Communication support in disaster zones

In the event of a natural disaster that disrupts phone and Internet systems over a wide area, autonomous aircraft could potentially hover over affected regions, carrying communications payloads that provide temporary telecommunications coverage to those in need.

However, such unpiloted aerial vehicles, or UAVs, are often expensive to operate, and can only remain in the air for a day or two, as is the case with most autonomous surveillance aircraft operated by the U.S. Air Force. Providing adequate and persistent coverage would require a relay of multiple aircraft, landing and refueling around the clock, with operational costs of thousands of dollars per hour, per vehicle.

Now a team of MIT engineers has come up with a much less expensive UAV design that can hover for longer durations to provide wide-ranging communications support. The researchers designed, built, and tested a UAV resembling a thin glider with a 24-foot wingspan. The vehicle can carry 10 to 20 pounds of communications equipment while flying at an altitude of 15,000 feet. Weighing in at just under 150 pounds, the vehicle is powered by a 5-horsepower gasoline engine and can keep itself aloft for more than five days — longer than any gasoline-powered autonomous aircraft has remained in flight, the researchers say.

The team is presenting its results this week at the American Institute of Aeronautics and Astronautics Conference in Denver, Colorado. The team was led by R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics; and Warren Hoburg, the Boeing Assistant Professor of Aeronautics and Astronautics. Hansman and Hoburg are co-instructors for MIT’s Beaver Works project, a student research collaboration between MIT and the MIT Lincoln Laboratory.

A solar no-go

Hansman and Hoburg worked with MIT students to design a long-duration UAV as part of a Beaver Works capstone project — typically a two- or three-semester course that allows MIT students to design a vehicle that meets certain mission specifications, and to build and test their design.

In the spring of 2016, the U.S. Air Force approached the Beaver Works collaboration with an idea for designing a long-duration UAV powered by solar energy. The thought at the time was that an aircraft, fueled by the sun, could potentially remain in flight indefinitely. Others, including Google, have experimented with this concept, designing solar-powered, high-altitude aircraft to deliver continuous internet access to rural and remote parts of Africa.

But when the team looked into the idea and analyzed the problem from multiple engineering angles, they found that solar power — at least for long-duration emergency response — was not the way to go.

Artificial intelligence and the future of technology

When Alphabet executive chairman Eric Schmidt started programming in 1969 at the age of 14, there was no explicit title for what he was doing. “I was just a nerd,” he says.

But now computer science has fundamentally transformed fields like transportation, health care and education, and also provoked many new questions. What will artificial intelligence (AI) be like in 10 years? How will it impact tomorrow’s jobs? What’s next for autonomous cars?

These topics were all on the table on May 3, when the Computer Science and Artificial Intelligence Laboratory (CSAIL) hosted Schmidt for a conversation with CSAIL Director Daniela Rus at the Kirsch Auditorium in the Stata Center.

Schmidt discussed his early days as a computer science PhD student at the University of California at Berkeley, where he looked up to MIT researchers like Michael Dertouzos. At Bell Labs he co-wrote Lex, the UNIX lexical-analyzer generator, before moving on to executive roles at Sun Microsystems, Novell, and finally Google, where he served as CEO from 2001 to 2011. In his current role as executive chairman of Google’s parent company, Schmidt focuses on Alphabet’s external matters, advising Google CEO Sundar Pichai and other senior leadership on business and policy.

Speaking with Rus on the topic of health care, Schmidt said that doing a better job of leveraging data will enable doctors to improve how they make decisions.

“Hospitals have enormous amounts of data, which is inaccessible to anyone except for themselves,” he said. “These [machine learning] techniques allow you to take all of that information, sum it all together, and actually produce outcomes.”

Schmidt also cited Google’s ongoing work in self-driving vehicles, including last week’s launch of 500 cars in Arizona, and addressed the issue of how technology will impact jobs in different fields.

“The economic folks would say that you can see the job that’s lost, but you very seldom can see the job that’s created,” said Schmidt. “While there will be a tremendous dislocation of jobs — and I’m not denying that — I think that, in aggregate, there will be more jobs.”

Rus also asked Schmidt about his opposition to the Trump administration’s efforts to limit the number of H-1B visas that U.S. tech companies can offer to high-skilled foreign workers.

“At Google we want the best people in the world, regardless of sex, race, country, or what-have-you,” said Schmidt. “Stupid government policies that restrict us from giving us a fair chance of getting those people are antithetical to our mission [and] the things we serve.”

Schmidt ended the conversation by imploring students to take the skills they’ve learned and use them to work on the world’s toughest problems.

Diamond optical circuits could work at large scales

Quantum computers are experimental devices that offer large speedups on some computational problems. One promising approach to building them involves harnessing nanometer-scale atomic defects in diamond materials.

But practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In today’s issue of Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

New system enables speedy analysis of laparoscopic procedures

Laparoscopy is a surgical technique in which a fiber-optic camera is inserted into a patient’s abdominal cavity to provide a video feed that guides the surgeon through a minimally invasive procedure.

Laparoscopic surgeries can take hours, and the video generated by the camera — the laparoscope — is often recorded. Those recordings contain a wealth of information that could be useful for training both medical providers and computer systems that would aid with surgery, but because reviewing them is so time consuming, they mostly sit idle.

Researchers at MIT and Massachusetts General Hospital hope to change that, with a new system that can efficiently search through hundreds of hours of video for events and visual features that correspond to a few training examples.

In work they presented at the International Conference on Robotics and Automation this month, the researchers trained their system to recognize different stages of an operation, such as biopsy, tissue removal, stapling, and wound cleansing.

But the system could be applied to any analytical question that doctors deem worthwhile. It could, for instance, be trained to predict when particular medical instruments — such as additional staple cartridges — should be prepared for the surgeon’s use, or it could sound an alert if a surgeon encounters rare, aberrant anatomy.
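The researchers’ method is not spelled out in this article, so the snippet below is only a schematic stand-in for the general idea of searching long recordings with just a few labeled examples: each video segment is reduced to a feature vector and assigned the surgical phase of its nearest labeled training example. The feature extractor is a placeholder and every name in the sketch is hypothetical.

```python
import numpy as np

PHASES = ["biopsy", "tissue removal", "stapling", "wound cleansing"]

def segment_features(frames):
    """Stand-in feature extractor: a real system would compute a learned
    visual descriptor per video segment; here we just average the frames."""
    return np.mean(frames, axis=(0, 1, 2))   # one small vector per segment

def label_segments(segments, examples):
    """Assign each segment the phase of its nearest labeled example.

    segments: list of frame arrays shaped (time, height, width, channels)
    examples: list of (frames, phase_name) pairs, a few per phase
    """
    example_feats = np.stack([segment_features(f) for f, _ in examples])
    example_labels = [label for _, label in examples]
    labels = []
    for seg in segments:
        feat = segment_features(seg)
        nearest = np.argmin(np.linalg.norm(example_feats - feat, axis=1))
        labels.append(example_labels[nearest])
    return labels

# Toy usage with random "video" segments; real inputs would be laparoscope frames.
rng = np.random.default_rng(0)
fake = lambda: rng.random((30, 8, 8, 3))     # 30 frames of 8x8 RGB
examples = [(fake(), phase) for phase in PHASES]
segments = [fake() for _ in range(5)]
print(label_segments(segments, examples))
```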

“Surgeons are thrilled by all the features that our work enables,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and senior author on the paper. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real-time.”

Joining Rus on the paper are first author Mikhail Volkov, who was a postdoc in Rus’ group when the work was done and is now a quantitative analyst at SMBC Nikko Securities in Tokyo; Guy Rosman, another postdoc in Rus’ group; and Daniel Hashimoto and Ozanan Meireles of Massachusetts General Hospital (MGH).

Report warns of hacking risk to electric grid

In a world where hackers can sabotage power plants and impact elections, there has never been a more crucial time to examine cybersecurity for critical infrastructure, most of which is privately owned.

According to MIT experts, over the last 25 years presidents from both parties have paid lip service to the topic while doing little about it, leading to a series of short-term fixes they liken to a losing game of “Whac-a-Mole.” This scattershot approach, they say, endangers national security.

In a new report based on a year of workshops with leaders from industry and government, the MIT team has made a series of recommendations for the Trump administration to develop a coherent cybersecurity plan that coordinates efforts across departments, encourages investment, and removes parts of key infrastructure like the electric grid from the internet.

Coming on the heels of a leak of the new administration’s proposed executive order on cybersecurity, the report also recommends changes in tax law and regulations to incentivize private companies to improve the security of their critical infrastructure. While the administration is focused on federal systems, the MIT team aimed to address what’s left out of that effort: privately-owned critical infrastructure.

“The nation will require a coordinated, multi-year effort to address deep strategic weaknesses in the architecture of critical systems, in how those systems are operated, and in the devices that connect to them,” the authors write. “But we must begin now. Our goal is action, both immediate and long-term.”

Entitled “Making America Safer: Toward a More Secure Network Environment for Critical Sectors,” the 50-page report outlines seven strategic challenges that would greatly reduce the risks from cyber attacks in the sectors of electricity, finance, communications and oil/natural gas. The workshops included representatives from major companies from each sector, and focused on recommendations related to immediate incentives, long-term research and streamlined regulation.

Automated vehicle systems studied in ongoing collaboration with Toyota

The MIT AgeLab will build and analyze new deep-learning-based perception and motion planning technologies for automated vehicles in partnership with the Toyota Collaborative Safety Research Center (CSRC). The new research initiative, called CSRC Next, is part of a five-year-old ongoing relationship with Toyota.

The first phase of projects with Toyota CSRC has been led by Bryan Reimer, a research scientist at the MIT AgeLab, which is part of the MIT Center for Transportation and Logistics. Reimer manages a multidisciplinary team of researchers and students focused on understanding how drivers respond to the increasing complexity of the modern operating environment. He and his team studied the demands of modern in-vehicle voice interfaces and found that they draw drivers’ eyes away from the road to a greater degree than expected, and that the demands of these interfaces need to be considered in the time-course optimization of such systems. Reimer’s study eventually contributed to the redesign of the instrumentation of the current Toyota Corolla and the forthcoming 2018 Toyota Camry. (Read more in the 2017 Toyota CSRC report.)

Reimer and his team are also building and developing prototypes of hardware and software systems that can be integrated into cars in order to detect everything about the state of the driver and the external environment. These prototypes are designed to work both with cars with minimal levels of autonomy and with cars that are fully autonomous.

Computer scientist and team member Lex Fridman is leading a group of seven computer engineers who are working on computer vision, deep learning, and planning algorithms for semi-autonomous vehicles. Deep learning is used both to understand the world around the car and to understand human behavior inside it.

“The vehicle must first gain awareness of all entities in the driving scene, including pedestrians, cyclists, cars, traffic signals, and road markings,” Fridman says. “We use a learning-based approach for this perception task and also for the subsequent task of planning a safe trajectory around those entities.”
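The CSRC software itself is not shown here, so the snippet below is only a generic illustration of the kind of perception step Fridman describes: running an off-the-shelf, COCO-pretrained object detector (torchvision’s Faster R-CNN, whose label set happens to include person, bicycle, car, bus, truck, traffic light, and stop sign) over a single driving-scene image. The image path and confidence threshold are hypothetical, and a production system would use models trained specifically for driving.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO class indices for a few driving-relevant categories.
COCO_NAMES = {1: "person", 2: "bicycle", 3: "car", 6: "bus", 8: "truck",
              10: "traffic light", 13: "stop sign"}

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # recent torchvision API
model.eval()

image = Image.open("dashcam_frame.jpg").convert("RGB")   # hypothetical frame
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    detections = model([tensor])[0]   # boxes, labels, scores for one image

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    name = COCO_NAMES.get(int(label))
    if name and float(score) > 0.6:   # keep confident, driving-relevant hits
        x1, y1, x2, y2 = [round(float(v), 1) for v in box]
        print(f"{name:13s} score={float(score):.2f} box=({x1}, {y1}, {x2}, {y2})")
```

A planner would then consume these detections, along with lane and map information, to choose a trajectory that keeps clear of the detected entities, which is the second task Fridman mentions.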