
Co-Lab Student Developer Reflections: Building a Co-Lab Co-Pilot

By Jonathan Reyes // May 10, 2024

This blog post was written by Jonathan Reyes, Trinity '26. Jonathan is a Co-Lab student employee on our Dev Team and worked on this project throughout the 2023-24 academic year.

This past summer, I embarked on an exciting journey as a participant in Code+, a Duke summer program designed to give students a taste of life as software engineers. As AI began to dominate headlines, our project focused on harnessing the power of artificial intelligence to create an innovative customer service chat platform for OIT. Little did I know, this was just the beginning of my exploration into the potential of AI in education.

Upon completing the program, I found myself eager to continue my work beyond the summer. Despite missing the deadline to apply for a student developer position at the Co-Lab, I reached out to one of the Code+ program coordinators, Jen Vizas, expressing my interest in furthering my AI endeavors. She got me in touch with Michael Faber, Senior Manager at the Co-Lab, and we arranged a meeting along with Danai Adkisson and Daniel Davis, who would become my project lead.

My proposal aimed to build upon the groundwork laid during the summer: an intuitive conversational assistant designed to help students access relevant information conveniently and effortlessly. Michael embraced the concept wholeheartedly, and with this newfound freedom to innovate, we delved into a brainstorming session eager to explore the possibilities AI could offer within the Co-Lab. Our goal became to take advantage of the wealth of Pathways data to enhance the student experience.

Content-Based Filtering without AI

We decided to start with a foundation that doesn't rely on generative AI, in order to demonstrate the core functionality and principles of the project. We built a recommendation system based on content-based filtering, suggesting modules similar to those a user has already taken. It uses natural language processing and machine learning techniques: TF-IDF to vectorize module descriptions, and a sigmoid kernel to compute pairwise similarity between them. This initial phase set a solid foundation for integrating more advanced AI methods later on, aligning with the envisioned capabilities of the assistant.
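For the curious, here's a minimal sketch of that approach in Python with scikit-learn. The module titles and descriptions are made up for illustration; our real pipeline runs over the Pathways module catalog.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import sigmoid_kernel

# Hypothetical module catalog; the real pipeline reads from Pathways data.
modules = {
    "Intro to 3D Printing": "Learn the basics of 3D printing and slicing software.",
    "Advanced 3D Printing": "Resin printing, multi-material prints, and post-processing.",
    "Intro to Python": "A beginner-friendly introduction to Python programming.",
}
titles = list(modules)

# Vectorize the module descriptions with TF-IDF.
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(modules.values())

# Pairwise similarity between every pair of descriptions via a sigmoid kernel.
similarity = sigmoid_kernel(matrix, matrix)

def recommend(title: str, n: int = 2) -> list[str]:
    """Return the n modules most similar to one the user has already taken."""
    idx = titles.index(title)
    ranked = sorted(enumerate(similarity[idx]), key=lambda pair: pair[1], reverse=True)
    return [titles[i] for i, _ in ranked if i != idx][:n]

print(recommend("Intro to 3D Printing"))
```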

Integrating Artificial Intelligence from OpenAI

It just so happened that this fall, OpenAI released a new set of tools that enabled developers to create conversational agents, with features such as "Threads" to manage longer conversations and "Retrieval" to store and search uploaded documents, along with improvements to function calling in their latest model, GPT-4 Turbo. Our envisioned process involved a function capable of retrieving course descriptions via an API call. However, relying solely on model-based function calling posed significant challenges. According to OpenAI, the latest models, such as 'gpt-3.5-turbo-0125' and 'gpt-4-turbo-preview', exhibit improved discernment in determining when to invoke functions and provide JSON responses that align closely with function signatures.
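To give a sense of the setup (the function name here is a hypothetical stand-in for our Pathways lookup, not our production code), the function-calling flow looked roughly like this:

```python
import json
from openai import OpenAI

client = OpenAI()

# The model is told about a function it *may* call; get_course_description
# is a hypothetical stand-in for our course lookup endpoint.
tools = [{
    "type": "function",
    "function": {
        "name": "get_course_description",
        "description": "Look up a Co-Lab course description by course name.",
        "parameters": {
            "type": "object",
            "properties": {"course_name": {"type": "string"}},
            "required": ["course_name"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": "What will I learn in Intro to 3D Printing?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call our function
    args = json.loads(message.tool_calls[0].function.arguments)
    print(args["course_name"])  # feed this into the real API call
else:  # ...and sometimes it doesn't, which was exactly our problem
    print(message.content)
```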

While this capability holds promise, it also presents inherent inconsistencies. Unless the user explicitly mentioned a course name, we couldn't reliably obtain the desired information via API calls, undermining the essence of our project. Furthermore, the subpar performance of models other than GPT-4 at function detection, coupled with the considerable expense of deploying GPT-4 at scale, meant this approach wouldn't make sense financially.

LangChain + Generated SQL queries

So we looked for alternatives to documents and API calls for retrieving our data. What if we just accessed a database directly? That's when we stumbled upon LangChain.

LangChain is a framework designed for creating context-aware applications using language models. It operates through sequences of calls called chains, which can include interactions with language models, tools, or preprocessing steps for data.

One specific application involves querying an SQL database table. The process entails converting user questions into SQL queries, executing them, and responding to the user based on the query results. LangChain also introduced the concept of agents powered by language models. Agents dynamically determine sequences of actions, offering a departure from hardcoded approaches. One example is the SQL agent, which leverages dynamic few-shot prompting, a technique that uses FAISS (Facebook AI Similarity Search) and OpenAI's embeddings model to incorporate relevant example queries into the prompt.
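Wiring this up takes surprisingly little code. Here's a rough sketch, assuming a recent LangChain release (import paths move between versions) and a placeholder connection string:

```python
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

# Placeholder connection string; ours pointed at an Azure SQL Server instance.
db = SQLDatabase.from_uri(
    "mssql+pyodbc://user:password@host/pathways?driver=ODBC+Driver+18+for+SQL+Server"
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# The agent turns a question into SQL, executes it, and answers in prose.
agent = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
agent.invoke({"input": "Which modules cover 3D printing?"})
```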

Our initial interactions with the SQL agent revealed significant performance issues. Despite our efforts to improve the agent with dynamic few-shot prompting, it consistently struggled to generate correct SQL queries. This not only undermined its utility but also necessitated extensive manual intervention to rectify errors, a process that quickly became unsustainable. Moreover, while our model excelled at minimizing token usage, it fell short in managing operational costs, particularly those of the Azure SQL Server database. The expenses accumulated from database operations overshadowed the gains made in token optimization, raising concerns about the framework's economic viability.

Vectorization 

Still, one idea stuck with us: passing only the data we needed, by embedding example queries and running a similarity search against the question being asked, to minimize costs and maximize speed. That's when we realized: why not just embed our entire database? Vector embeddings weren't completely new to us; we had worked on a project using this technology at the beginning of the school year. Now it was just at a much larger scale, and we found ourselves faced with the task of connecting the dots. That's when Michael got me in touch with Daniel Medina, a graduate student in Jon Reifschneider's CREATE Lab. Daniel's expertise in vector databases and large language models sparked a dialogue outlining the tools and processes needed to make our vision a reality.

Transitioning from the SQL agent and Azure's SQL Server database, we embraced MongoDB and its Atlas Vector Search tool. The difference was night and day. With the integration of OpenAI's 'text-embedding-3-small' model, our revamped pipeline embeds not only user queries but also the descriptions of all courses in our database. Leveraging the Hierarchical Navigable Small Worlds (HNSW) algorithm within Atlas Vector Search, we navigate a high-dimensional space of vectors to pinpoint the most relevant courses in response to a query, assigning each a score. The selected courses are then incorporated into the model's prompt before the answer is presented back to the user.
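Here's a simplified sketch of that retrieval step. The index name, field names, and connection string below are placeholders, but the overall shape of the pipeline is the same:

```python
from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()
# Placeholder connection string and collection names.
collection = MongoClient("mongodb+srv://...")["colab"]["courses"]

def top_courses(question: str, k: int = 3):
    # Embed the user's question with the same model used for the course descriptions.
    query_vector = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # Atlas Vector Search (HNSW under the hood) returns the nearest course
    # descriptions, each with a similarity score.
    pipeline = [
        {"$vectorSearch": {
            "index": "course_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": k,
        }},
        {"$project": {"title": 1, "description": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))
```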

This pivotal moment encapsulates the essence of innovation and collaboration. Our model can now answer questions in under three seconds with high accuracy. Our next step? We envision a dynamic workflow where user queries are first categorized by a model and then routed to a specialized language model tailored to their context. This method should (fingers crossed) give us the best possible answer for every query.
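As a sketch of what that routing might look like (the categories and the model mapping below are invented for illustration, since this part is still on the drawing board):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical mapping from query category to the model that handles it.
ROUTES = {"course_lookup": "gpt-3.5-turbo-0125", "open_ended": "gpt-4-turbo-preview"}

def route(question: str) -> str:
    """Classify a query with a cheap model, then pick a model for the real answer."""
    label = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[
            {"role": "system",
             "content": "Classify the question as 'course_lookup' or 'open_ended'. "
                        "Reply with the label only."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content.strip()
    return ROUTES.get(label, "gpt-3.5-turbo-0125")
```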

Next Steps

As our journey continues, we remain committed to exploring the intersection of AI and education. While our initial projects may have been stepping stones, they have laid the foundation for future innovation within Duke's ecosystem. Who knows, perhaps our models will one day be integrated into the Pathways website or serve as the backbone of educational chat platforms. In the end, what started as a summer program has blossomed into a collaborative effort to revolutionize the way students engage with educational resources. With passion, perseverance, and a sprinkle of AI magic, the possibilities are limitless. Stay tuned for the next chapter in our AI adventure at Duke's Innovation Co-Lab.