Yale Engineering Advances AI Innovation with Seed Funding for High-Impact Research and Workshops
The Yale School of Engineering & Applied Science has awarded seed research grants to support new, ambitious, and speculative research in artificial intelligence. These grants, a strategic initiative aligned with Yale Engineering's commitment to AI as a research priority, will empower researchers to pursue pioneering projects across a range of critical areas, from foundational AI research to practical applications that intersect with fields such as materials science, environmental sustainability, and healthcare.
This year's awardees include interdisciplinary teams exploring innovative ways to harness AI's potential, with projects designed to achieve impact through technological breakthroughs, community engagement, and industry partnerships. Funded projects were selected based on their potential to drive advancements that support Yale Engineering's strategic vision and to position Yale researchers for future external funding opportunities.
"AI is a powerful and rapidly-evolving tool, and while much of the public excitement tends to focus on its natural-language applications and realistic mimicry, its potential uses are much broader and more profound than that," said Yale Engineering Dean Jeffrey Brock. "These projects demonstrate just a few of the ways that our faculty are taking a strategic approach to advancing AI, from tackling the problem of 'hallucinations' to devising new brain-inspired approaches to computer memory systems."
Awarded projects and workshops span Yale Engineering's strategic focus areas in AI, including the technological aspects of AI, its applications, and its impact on people and society. Projects include the development of interpretable AI models for complex scientific reasoning, applications integrating AI into sustainable materials and medical diagnostics, and tools that enhance storytelling in science and engineering. Workshops funded under this initiative will explore AI's potential in transforming engineered wood for sustainable construction and foster interdisciplinary dialogue on multimodal deep learning.
Supported by Yale Engineering and the Office of the Provost, the competitive seed funding program is designed to provide Yale Engineering faculty and their collaborators from across the university with resources to generate preliminary results, strengthen their research portfolios, and enhance competitiveness for external funding. Awarded projects are eligible for additional support in the form of cloud credits from Amazon and Google, further amplifying their capacity to leverage cutting-edge resources in pursuit of pioneering research.
This year's funded research and workshop proposals are:
Brain-Inspired Memory Systems for AI Infrastructure
Awardees: Abhishek Bhattacharjee & Anurag Khandelwal (Computer Science)
Yale Engineering researchers Abhishek Bhattacharjee and Anurag Khandelwal are pioneering a novel approach to solve a critical bottleneck in AI infrastructure: memory system limitations. As the computational demands of AI rapidly increase, traditional memory systems lag behind, slowing overall performance despite advances in processing power. Their project draws on principles from cognitive science to improve how data is moved and stored in memory. By modeling memory management on the human brain's ability to handle "hot" (likely to be needed soon) and "cold" (unlikely to be needed soon) memories, the team aims to optimize data flow and enhance processing speeds in AI tasks.
The researchers will use the Expected Value of Control model, a well-established cognitive concept that explains how the brain manages focus based on anticipated rewards, to improve memory allocation in AI systems. Existing state-of-the-art algorithms like Linux's MG-LRU often exhibit inconsistent performance with modern AI workloads. In contrast, Bhattacharjee and Khandelwal's brain-inspired approach could streamline memory usage, ensuring AI systems operate smoothly without costly slowdowns. With strong interest already expressed by industry leaders, their work holds promise for a transformative impact on AI infrastructure, potentially setting a new standard for memory systems in commercial servers.
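To make the hot/cold framing concrete, below is a minimal, illustrative sketch of a page cache that ranks pages by a hypothetical expected-value score rather than by recency alone; the scoring function, its weights, and the ExpectedValuePageCache class are assumptions made for illustration, not the awardees' actual design.

```python
# Illustrative only: a toy eviction policy that scores pages by an "expected
# value" of keeping them resident, loosely echoing the hot/cold framing above.
# The scoring function and weights are hypothetical, not the project's design.
import time


class ExpectedValuePageCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = {}  # page_id -> (last_access_time, access_count)

    def _score(self, last_access: float, count: int, now: float) -> float:
        # Hypothetical "expected value" of keeping a page resident: frequently
        # and recently used pages score high ("hot"); stale, rarely used pages
        # score low ("cold") and become eviction candidates.
        recency = 1.0 / (1.0 + (now - last_access))
        return recency * count

    def access(self, page_id) -> None:
        now = time.monotonic()
        _, count = self.pages.get(page_id, (now, 0))
        self.pages[page_id] = (now, count + 1)
        if len(self.pages) > self.capacity:
            self._evict(now)

    def _evict(self, now: float) -> None:
        # Evict the "coldest" page, i.e. the one with the lowest score.
        coldest = min(self.pages, key=lambda p: self._score(*self.pages[p], now))
        del self.pages[coldest]


cache = ExpectedValuePageCache(capacity=3)
for page in ["a", "b", "a", "c", "a", "d"]:  # "a" stays hot; "b" goes cold
    cache.access(page)
print(sorted(cache.pages))  # expected: ['a', 'c', 'd']
```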
Exploring Photo-Electro-Chemical Neural Network for Energy-Efficient AI Computing
Awardees: Shu Hu (Chemical & Environmental Engineering) & Fengnian Xia (Electrical & Computer Engineering)
Shu Hu and Fengnian Xia are leading an ambitious interdisciplinary project to address another pressing challenge of AI: the high energy demands of digital AI computing. Their goal is to design a new type of AI hardware that mimics the brain's energy efficiency. As AI workloads grow, particularly with the rise of large language models, energy-efficient computing has become a top priority. Traditional AI infrastructure struggles to balance performance with energy costs, underscoring the need for innovative hardware solutions.
The research team's approach uses photo-electro-chemical processes to create a 3D, reconfigurable neural network. This design leverages the brain's adaptability, enabling neural networks to change their structure and connectivity based on specific needs. By creating an all-analogue, brain-inspired computing model, they hope to achieve higher energy efficiency without sacrificing performance, opening doors for sustainable AI hardware.
Improving Human Storytelling Skills in Science and Engineering with Generative AI
Awardees: Marynel Vázquez (Computer Science), Ryan Wepler, and Lauren Gonzalez (Yale Poorvu Center for Teaching & Learning)
Computer scientist Marynel Vázquez, along with colleagues from Yale's Poorvu Center for Teaching & Learning, is spearheading a project to develop an AI tool that helps science and engineering writers improve their storytelling skills. Traditional AI tools support technical aspects of writing, such as grammar and tone, but often overlook the narrative structure needed to engage audiences. This project aims to create an AI-powered agent that assists writers in constructing compelling story arcs tailored to their audience and purpose, helping researchers convey their ideas more persuasively. By developing storytelling skills, scientists can communicate the value of their work more effectively, leading to greater public engagement and impactful research programs.
The team's approach is unique in two key areas. First, they will explore personalized feedback, where the AI tailors suggestions to the specific goals and style of each writer. Second, the tool will introduce interactive learning techniques, encouraging writers to refine their storytelling through active engagement rather than passive correction. By leveraging LLMs and insights from writing pedagogy, the project aspires to create an AI agent that not only enhances written communication in STEM fields but also promotes deeper learning and understanding.
Interpretable AI Models for Physics Reasoning
Awardees: John Sous (Applied Physics), Anna Gilbert (Electrical & Computer Engineering), and Omar Montasser (Statistics & Data Science)
Yale Engineering's John Sous and Anna Gilbert, in collaboration with Omar Montasser from the Department of Statistics & Data Science, aim to develop AI models capable of transparent, interpretable reasoning in physics. While AI systems have made strides in natural language and mathematical problem-solving, they still struggle with complex reasoning tasks and often generate "hallucinations" – incorrect outputs delivered with high confidence. To tackle this, the team proposes a "mechanistic interpretability" approach inspired by physics. This involves examining how simple, interpretable models, like two-layer transformers, can handle mathematical operations fundamental to physics reasoning. This seed-funded project will initially focus on tasks like modular arithmetic and the dynamics of chaotic systems, with plans to explore how AI can reliably predict outcomes in physics.
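The sketch below illustrates the general flavor of such an experiment: a small two-layer transformer trained to predict (a + b) mod p from short token sequences. The model size, hyperparameters, and training loop here are illustrative assumptions, not the team's actual configuration.

```python
# Minimal sketch (assumed setup): a two-layer transformer on modular addition.
import torch
import torch.nn as nn

P = 97   # prime modulus; tokens 0..P-1 are operands, token P is "="
EQ = P

# Every (a, b) pair as the sequence "a b =", labeled with (a + b) mod P.
pairs = torch.tensor([(a, b) for a in range(P) for b in range(P)])
inputs = torch.cat([pairs, torch.full((len(pairs), 1), EQ)], dim=1)
labels = (pairs[:, 0] + pairs[:, 1]) % P

class TinyTransformer(nn.Module):
    def __init__(self, vocab=P + 1, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Parameter(torch.zeros(3, d_model))  # fixed length-3 input
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, P)  # predict the residue class

    def forward(self, x):
        h = self.encoder(self.embed(x) + self.pos)
        return self.head(h[:, -1])  # read the answer off the "=" position

model = TinyTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(5):  # a real study would train far longer and hold out data
    loss = loss_fn(model(inputs), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

Once a model of this size solves the task, its attention patterns and weight matrices are small enough to be inspected directly, which is what makes this scale of model attractive for interpretability studies.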
This research will address a key challenge: creating AI models that are both accurate and understandable, especially for scientific applications. By studying the inner workings of models trained to predict the behavior of chaotic systems, such as a double pendulum, they seek to uncover ways to improve AI's robustness in unfamiliar scenarios. Success in this effort could lead to powerful AI tools for scientific discovery and increase trust in AI applications for complex problem-solving.
Graph Representation Learning and Retrieval for Domain-Specific Large Language Models
Awardees: Rex Ying (Computer Science), Leandros Tassiulas (Electrical & Computer Engineering), and Hua Xu (School of Medicine)
Yale researchers Rex Ying, Leandros Tassiulas, and Hua Xu are pioneering a new framework to enhance the capabilities of large language models (LLMs) in specialized fields like telecommunications and medicine.
While LLMs have transformed general language processing, they often struggle with domain-specific tasks, lacking the specialized knowledge and precision required in science and engineering. They are also prone to "hallucinations," generating confident yet inaccurate responses, which is especially problematic in applications where accuracy and reliability are crucial. Their framework targets both of these shortcomings.
The team's approach introduces a graph-based retrieval-augmented generation (RAG) technique, which allows LLMs to access domain-specific knowledge stored in graph structures, reducing hallucinations and improving response relevance by connecting related literature more precisely. By fine-tuning LLMs with this graph-based structure, the researchers aim to create models that not only understand technical content more accurately but also retain essential connections between documents. In addition to reducing errors, this method enhances the LLM's ability to process complex relationships within specialized topics. Initial applications will focus on assisting telecom engineers and biomedical professionals, enabling LLMs to support diagnostics, literature retrieval, and even patient education through reliable, expert-driven AI models.
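As a rough illustration of the graph-based RAG idea, the sketch below matches a query against entities in a toy knowledge graph, serializes the retrieved facts, and prepends them to the prompt an LLM would receive. The graph contents, the entity-matching step, and the prompt format are hypothetical stand-ins, not the team's framework.

```python
# Toy graph-based retrieval-augmented generation (illustrative assumptions).
# Hypothetical knowledge graph: entity -> list of (relation, neighbor) edges.
GRAPH = {
    "5G NR": [("defined_by", "3GPP Release 15"), ("uses", "OFDM")],
    "OFDM": [("stands_for", "orthogonal frequency-division multiplexing")],
    "3GPP Release 15": [("published_in", "2018")],
}

def retrieve_subgraph(query: str, hops: int = 1) -> list[str]:
    """Collect facts around every graph entity mentioned in the query."""
    facts = []
    frontier = [entity for entity in GRAPH if entity.lower() in query.lower()]
    for _ in range(hops):
        next_frontier = []
        for entity in frontier:
            for relation, neighbor in GRAPH.get(entity, []):
                facts.append(f"{entity} --{relation}--> {neighbor}")
                next_frontier.append(neighbor)
        frontier = next_frontier
    return facts

def build_prompt(query: str) -> str:
    facts = retrieve_subgraph(query)
    context = "\n".join(facts) if facts else "(no graph facts found)"
    return (
        "Answer using only the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{context}\n\nQuestion: {query}"
    )

# The assembled, fact-grounded prompt is what would be sent to the LLM.
print(build_prompt("Which standard defines 5G NR, and what modulation does it use?"))
```

Grounding the model's answer in retrieved facts, rather than in its parametric memory alone, is what reduces the chance of a confident but unsupported response.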
AI for Engineered Wood Workshop
Awardee: Liangbing Hu (Electrical & Computer Engineering)
The "AI for Engineered Wood" workshop, part of Yale's Sustainable Materials Research Summit (SMART) 2025, will delve into the powerful role of artificial intelligence in advancing engineered wood. By convening experts from AI, materials science, environment, architecture, and environmental engineering, the workshop will foster collaboration to tackle sustainability, performance, and cost-effectiveness challenges in engineered wood.
Key objectives include establishing a research roadmap for Yale, securing external funding, and enhancing Yale's leadership in AI-driven sustainability research. The workshop will demonstrate how AI can optimize the design, production, and application of wood-based materials, which are essential for reducing CO₂ emissions in the building sector.
The event will feature sessions on AI's impact in materials science, design optimization, and construction, followed by a panel on future directions in AI applications for engineered wood. With a distinguished lineup of speakers, the workshop will create pathways for impactful research partnerships, fostering technological advances that could reshape the sustainable building materials industry.
Multimodal Deep Learning Towards the Future of AI Workshop
Awardees: Alex Wong, Arman Cohan, Rex Ying (Computer Science), and Smita Krishnaswamy (School of Medicine/Computer Science)
This multi-PI-led workshop aims to drive innovation by exploring how AI can effectively integrate diverse data types – such as images, language, audio, and graphs – to tackle complex scientific and engineering challenges.
The workshop plans to bring together AI experts and domain specialists to bridge the gap between specialized knowledge and advanced AI methodologies. By leveraging insights from multiple data modalities, the event will encourage new approaches to enhance AI's capability in applications that span biology, chemistry, engineering, telecommunications, and beyond.
The program will include monthly seminars and an annual full-day event, combining keynotes, discussions, and collaborative sessions. The main goals are to foster communication between domain and AI experts, develop foundation models that can leverage multimodal data for robust solutions, and inspire the cross-application of deep-learning techniques across modalities. This initiative is positioned to empower Yale researchers to lead in developing next-generation AI models that can integrate and reason with multiple data types, setting a foundation for breakthroughs across a wide array of fields.