Research AI

UMBC faculty in Computer Science and Information Systems have been conducting research classified as artificial intelligence (AI) since the mid-1980s. The UMBC Center for AI has more than 60 faculty members with research interests in AI and related areas, including robotics, machine learning, natural language understanding, data science, image processing, multi-agent systems, large language models, knowledge representation and reasoning, planning, knowledge graphs, and neural networks. These faculty members work in 30 laboratories and research centers and teach many AI-related courses across departments and disciplines.

With the recent introduction of generative AI, we now see opportunities for AI, specifically large language models (LLMs), to support research in many different ways: improving research productivity by organizing documents, summarizing qualitative data, or serving as a component in the research itself. This work often draws on a variety of resources; some of the resources available for AI research are listed below.

Chip-GPU (formerly the Ada) cluster

AI computation often relies on highly specialized graphics processing units (GPUs).

The Chip-GPU cluster consists of twenty nodes, each with two 24-core Intel Cascade Lake CPUs and 384 GB of memory, as well as ten NVIDIA nodes of various GPU architectures that can be used for research:

  • Four nodes with eight NVIDIA RTX 2080 Ti GPUs each
  • Seven nodes with eight NVIDIA RTX 6000 GPUs each
  • Two nodes with eight NVIDIA RTX 8000 GPUs and an extra 384 GB of memory each
  • Eight nodes with four NVIDIA L40S GPUs each
  • Two nodes with two NVIDIA H100 GPUs each

The Chip-GPU subcommittee, a faculty governance group, determines Chip-GPU usage policies and provides input to DoIT on the cluster's needs.
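As a minimal sketch of how a batch job might confirm which GPUs it was allocated on a shared cluster like this one (this assumes a scheduler and driver that set the standard `CUDA_VISIBLE_DEVICES` environment variable; the scheduler and variable names are assumptions, not details from this page):

```python
import os

def allocated_gpus(env: dict) -> list:
    """Return the GPU indices visible to this job, if any.

    On a GPU node, the scheduler typically restricts the job to its
    allocated devices via CUDA_VISIBLE_DEVICES.
    """
    visible = env.get("CUDA_VISIBLE_DEVICES", "")
    return [g for g in visible.split(",") if g]

# Inside a real job you would pass os.environ; here we simulate a node
# where the job was granted two GPUs.
print(allocated_gpus({"CUDA_VISIBLE_DEVICES": "0,1"}))  # ['0', '1']
```

Checking this variable at the top of a job script is a quick way to verify that a GPU request was actually honored before launching a long training run.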

In addition to the Chip-GPU cluster, DoIT has purchased a node with two NVIDIA H100 GPUs that is being configured to run Meta's open-source Llama LLM. This gives faculty the opportunity to conduct LLM-based research without incurring per-call costs when an LLM must be run programmatically.
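To illustrate what "running an LLM programmatically" can look like, the sketch below assembles a request for a locally hosted Llama model served behind an OpenAI-compatible chat-completions endpoint (a common setup for servers such as vLLM). The endpoint URL and model name are placeholders, since DoIT has not yet published the interface for this node:

```python
import json

# Placeholder endpoint -- the actual URL will depend on how DoIT deploys the node.
LLAMA_ENDPOINT = "http://llama.example.edu/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama-3-70b-instruct") -> dict:
    """Assemble a request body in the OpenAI-compatible chat-completions format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

# The body would be POSTed to LLAMA_ENDPOINT as JSON (e.g., with urllib or requests).
body = json.dumps(build_chat_request("Summarize this abstract in two sentences."))
```

Because the call is just an HTTP request, the same research code can later be pointed at a commercial endpoint with minimal changes.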

Cloud Computing AI

UMBC has cloud computing contracts for Microsoft Azure (OpenAI), Google Cloud (Gemini), and Amazon AWS Bedrock (Anthropic and Meta). These vendors provide a wide range of commercial options that faculty can use in their research. There is a cost to using cloud computing; DoIT's contracts are already in place, and if your grant allows the purchase of cloud computing resources, we can set up accounts specific to your grant for chargebacks. DoIT has worked with several cloud vendors and has found that cloud computing costs for AI are quite reasonable when architected correctly. Please submit a ticket and let DoIT know how we can help you.

Resources:

One advantage of using cloud computing for generative AI development is that each cloud vendor has built a powerful development environment with the libraries needed to build a generative AI application.

Microsoft Azure AI Foundry (formerly Azure AI Studio)

Google Vertex AI 

AWS Bedrock IDE for AI
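As an example of programmatic cloud use under one of these contracts, the sketch below builds a request for an Anthropic model on AWS Bedrock using the boto3 SDK. The model ID shown is one of Bedrock's published Claude IDs but should be treated as an assumption, and the actual network call (which requires AWS credentials tied to a grant account) is kept inside a function rather than executed:

```python
import json

def build_claude_body(prompt: str, max_tokens: int = 512) -> dict:
    """Request body in the Anthropic messages format used on Bedrock."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def invoke_claude(prompt: str,
                  model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Send the request via boto3; requires AWS credentials with Bedrock access."""
    import boto3  # imported here so the body builder works without the SDK installed

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(build_claude_body(prompt)),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```

Because billing is per token, keeping `max_tokens` modest for exploratory work is one of the architectural choices that keeps these costs reasonable.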

Using Generative AI for Research Productivity

Each week brings new developments in generative AI. For faculty interested in getting started, we recommend exploring some of the free GenAI Tools. Faculty can request access to Amplify through AI Tools Support. Alternatively, if you use Microsoft software heavily, consider Microsoft Copilot; if you are a heavy Google user, try Gemini.