Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event
Presentations targeted high-impact intersections of AI and other areas, such as health care, business, and education.
Composed of “computing bilinguals,” the Undergraduate Advisory Group provides vital input to help advance the mission of the MIT Schwarzman College of Computing.
But a new study shows how advanced steelmaking technologies could substantially reduce carbon emissions.
The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.
A new framework from the MIT-IBM Watson AI Lab supercharges language models, so they can reason over, interactively develop, and verify valid, complex travel agendas.
A new book from Professor Munther Dahleh details the creation of a unique kind of transdisciplinary center, uniting many specialties through a common need for data science.
Forget optimists vs. Luddites. Most people evaluate AI based on its perceived capability and their need for personalization.
The winning essay of the Envisioning the Future of Computing Prize puts health care disparities at the forefront.
With demand for cement alternatives rising, an MIT team uses machine learning to hunt for new ingredients across the scientific literature.
In an annual tradition, MIT affiliates embarked on a trip to Washington to explore federal lawmaking and advocate for science policy.
MIT’s Initiative for New Manufacturing extends a deep Institute legacy of expanding US growth and jobs through industrial production.
The Institute-wide effort aims to bolster industry and create jobs by driving innovation across vital manufacturing sectors.
Sendhil Mullainathan brings a lifetime of unique perspectives to research in behavioral economics and machine learning.
Researchers share the design and implementation of an incentive-based Space Sustainability Rating.
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.