Engagements tagged deep-learning

This is a test
MGHPCC

Yersinia pestis, the bacterium that causes the bubonic plague, uses a type III secretion system (T3SS) to inject toxins into host cells. The structure of the Y. pestis T3SS needle has not been modeled using AI or cryo-EM, although T3SS structures in homologous bacteria have been solved by cryo-EM. Previously, we created possible hexamers of the Y. pestis T3SS needle protein, YscF, using ColabFold and the AlphaFold2 Colab notebook on Google Colab in an effort to learn more about the needle structure and calcium regulation of secretion. Hexamers and mutated hexamers were designed using data from a wet-lab experiment by Torruellas et al. (2005).

T3SS structures in homologous organisms show a 22- or 23-mer arrangement in which rings of hexamers interlock in layers. When folding was attempted with more than six monomers, however, we observed larger single rings of monomers, revealing the limitations of these online tools. To create a more accurate complete needle structure, different software capable of modeling a helically polymerized needle is required. The number of atoms in the predicted final needle is far more than our computational infrastructure can handle, so we need the computational resources of a supercomputer.

We have hypothesized two ways to direct the folding that could yield a more accurate needle structure. The first is to fuse the current hexamer into a single protein chain, so that the software recognizes the hexamer as one protein; this will make it easier to connect multiple hexamers together. Alternatively, or additionally, the cryo-EM structures of the T3SS of Shigella flexneri and Salmonella enterica Typhimurium can be used as templates to guide construction of the Y. pestis T3SS needle. The full AlphaFold library or a program such as RoseTTAFold could help us predict protein-protein interactions more accurately for large structures. Based on our needs, we have identified TAMU ACES, Rockfish, and Stampede-2 as promising resources for this project. The generated model of the Y. pestis T3SS YscF needle will provide insight into a possible structure of the needle.
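As a rough illustration of the two input-preparation strategies described above, the Python sketch below writes ColabFold-style FASTA files for (1) a six-chain YscF complex, with chains separated by ":" as ColabFold expects for complexes, and (2) a single fused chain joined by flexible linkers. The YscF sequence and the linker composition here are placeholders, not the project's actual inputs.

```python
# Sketch for preparing the two hexamer inputs described above.
# Assumptions: placeholder YscF sequence, an arbitrary glycine-serine linker,
# and ColabFold's colon-separated format for multi-chain complexes.

N_COPIES = 6
LINKER = "GGGGS" * 3      # assumed flexible linker; length/composition is a modeling choice
YSCF = "MSNF..."          # placeholder; substitute the full Y. pestis YscF sequence

# Strategy 1: hexamer modeled as a six-chain complex
with open("yscf_hexamer_complex.fasta", "w") as fh:
    fh.write(">YscF_hexamer_complex\n")
    fh.write(":".join([YSCF] * N_COPIES) + "\n")

# Strategy 2: hexamer fused into one protein chain so the software treats it as one protein
with open("yscf_hexamer_fused.fasta", "w") as fh:
    fh.write(">YscF_hexamer_fused\n")
    fh.write(LINKER.join([YSCF] * N_COPIES) + "\n")

# Either file could then be passed to colabfold_batch, e.g.:
#   colabfold_batch yscf_hexamer_complex.fasta output_dir/
```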

Status: Recruiting
AI for Business
San Diego State University

The research focus is to apply the pre-training techniques of large language models to the encoding process of the Code Search Project, improving the existing model and developing a new code-search model. The assistant will explore a transformer or an equivalent model (such as GPT-3.5) with fine-tuning, which can help achieve state-of-the-art performance on NLP tasks. The research also aims to test and evaluate various state-of-the-art models to identify the most promising ones.
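As a minimal, hedged sketch of the encoding-based code search described above (assuming the Hugging Face transformers library and a public code encoder such as microsoft/codebert-base, which may differ from the project's actual model), queries and snippets can be embedded and ranked by cosine similarity:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text: str) -> torch.Tensor:
    """Encode a query or code snippet into a single mean-pooled vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)

query = "read a CSV file into a dataframe"
snippets = [
    "def load(path): return pd.read_csv(path)",
    "def save(df, path): df.to_csv(path, index=False)",
]

q = embed(query)
scores = [torch.cosine_similarity(q, embed(s), dim=0).item() for s in snippets]
print(max(zip(scores, snippets)))  # highest-scoring snippet is the search hit
```

Fine-tuning would replace the frozen encoder above with one trained on paired query/code data, but the retrieval step stays the same: embed, then rank by similarity.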

Status: Complete
Surgical Video Understanding using Video LLM
UCSC

This project aims to develop a deep learning-based system for analyzing surgical videos using a multimodal LLM. The scope includes detecting surgical phases, recognizing instruments, identifying anomalies, and generating real-time or post-operative summaries. Expected outcomes include improved surgical workflow analysis, automated documentation, and enhanced training for medical professionals.

The project will explore state-of-the-art Video LLM architectures and develop a new model specific to surgical video understanding, using software packages such as PyTorch, TensorFlow, OpenCV, and Hugging Face's Transformers. The research need is to improve the interpretability and efficiency of surgical video analysis, leveraging multimodal learning to combine visual and textual understanding.
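A minimal sketch of the kind of pipeline these packages support is shown below: uniform frame sampling with OpenCV feeding a placeholder PyTorch phase-classification head. The architecture, phase count, and frame rate are assumptions for illustration, not the project's actual design, which would use a video LLM backbone.

```python
import cv2
import torch
import torch.nn as nn

def sample_frames(path: str, every_n_seconds: float = 1.0):
    """Yield resized frames sampled roughly every `every_n_seconds` from a video file."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield cv2.resize(frame, (224, 224))
        idx += 1
    cap.release()

class PhaseClassifier(nn.Module):
    """Toy per-frame surgical-phase head; a real system would use a pretrained video LLM."""
    def __init__(self, num_phases: int = 7):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 256), nn.ReLU())
        self.head = nn.Linear(256, num_phases)

    def forward(self, x):  # x: (batch, 3, 224, 224)
        return self.head(self.backbone(x))

model = PhaseClassifier()
for frame in sample_frames("surgery.mp4"):  # "surgery.mp4" is a placeholder path
    x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    logits = model(x)  # per-frame phase scores
```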

We need high-performance computing (HPC) clusters, large-scale storage, and GPU accelerators to train and fine-tune the models efficiently.
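For illustration only, a typical multi-GPU fine-tuning job on such a cluster might use PyTorch DistributedDataParallel; the model, data, and launch command below are placeholders rather than the project's actual training setup.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")            # one process per GPU
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 7).cuda(local_rank)   # placeholder for the video model head
    model = DDP(model, device_ids=[local_rank])
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(100):                               # placeholder training loop with random data
        x = torch.randn(32, 512, device=local_rank)
        loss = model(x).pow(2).mean()
        optim.zero_grad()
        loss.backward()
        optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launched with e.g.: torchrun --nproc_per_node=4 train_ddp.py
```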

Status: On Hold
A brainwide “universal translator” for neural dynamics at single-cell, single-spike resolution
Columbia University

In this project, our primary goal is to develop a multimodal foundation model of the brain by combining large-scale, self-supervised learning with the IBL brainwide dataset. This model aims to serve as a "universal translator," facilitating automatic translation from neural activity to various outputs such as behavior, brain location, neural dynamics prediction, and information flow prediction. To achieve this, we will leverage ACCESS computational resources for model training, fine-tuning, and testing. These resources will support the computation-intensive tasks involved in training large-scale deep learning models on distributed GPUs, as well as processing and analyzing the extensive dataset. Additionally, we will utilize software packages tailored for deep learning to implement our algorithms and models effectively. Ultimately, the project's outcome will be shared as an open-source model, serving as a valuable resource for global neuroscience research and the development of brain-computer interfaces. With ACCESS resources, we aim to accelerate the advancement of neuroscience and enable broader participation in brain-related research worldwide.
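As a conceptual sketch of the self-supervised pre-training described above (not the project's actual architecture), binned spike counts can be masked in time and reconstructed with a small transformer encoder; all shapes, the masking scheme, and the hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn

N_NEURONS, N_BINS, D_MODEL = 128, 100, 256

class MaskedNeuralEncoder(nn.Module):
    """Masked reconstruction of binned population activity (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(N_NEURONS, D_MODEL)      # embed each time bin's population vector
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.decode = nn.Linear(D_MODEL, N_NEURONS)     # reconstruct activity in masked bins

    def forward(self, spikes, mask):
        x = spikes.masked_fill(mask.unsqueeze(-1), 0.0)  # zero out masked time bins
        return self.decode(self.encoder(self.embed(x)))

model = MaskedNeuralEncoder()
spikes = torch.poisson(torch.full((8, N_BINS, N_NEURONS), 2.0))  # synthetic spike counts
mask = torch.rand(8, N_BINS) < 0.25                              # mask ~25% of time bins
recon = model(spikes, mask)
loss = (recon - spikes)[mask].pow(2).mean()                      # loss only on masked bins
loss.backward()
```

In the full multimodal setting, the same encoder would additionally be trained to predict behavior, brain location, and other outputs from the learned representations.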

Status: Declined