The LC LLamaMe API is now integrated with LaunchIT 2.3 and is generally available (GA). This capability was presented at the LC user meeting in March 2025; see the presentation.
LLamaMe is LC's locally hosted large language model (LLM) service. LC provides GA API access to locally hosted open-source LLMs served with the vLLM library in the CZ, RZ, and SCF. Join the internal LC LLamaMe Microsoft Teams channel for updates.
Large Language Models hosted in LC
LC Network Zone | Large Language Model | Max Context Length (tokens) | GPU Infrastructure
---|---|---|---
Collaboration Zone (CZ) | Meta-Llama-3.3-70B-Instruct | 8192 | 2x A100 (80 GB)
Restricted Zone (RZ) | Meta-Llama-3.1-8B-Instruct | 4096 | 1x A100 (40 GB)
Secure Compute Facility (SCF) | Meta-Llama-3.1-70B-Instruct-AWQ-INT4 | 4096 | 1x H100 (80 GB)
NOTE LC locally hosted models are subject to change and may be upgraded in the future. Requests are rate limited to 10 per minute, and API keys expire after 30 days.
Getting a LLamaMe API Key
Provision an API key for the LLamaMe endpoint through the LaunchIT catalog. For more information, see our documentation on LaunchIT.
In the LaunchIT catalog, select the workspace for the project that will use API access. Keys can also be provisioned directly from a workspace.

Once your API key has been created, you can access it at any time through your workspace dashboard. LLamaMe will be listed as a separate resource there, and the LLamaMe endpoint and model you provisioned the key for are displayed alongside the key.

NOTE Keys expire every 30 days and must be regenerated to maintain access to the LLamaMe API. You can regenerate a key at any time through your LaunchIT workspace dashboard.

Getting Started with the LLamaMe API
Set your API_KEY as an environment variable (copy it from your LaunchIT workspace dashboard). Note that there are no spaces around the `=`:
export API_KEY=<your API key>
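Before running the examples, you can fail fast if the variable was not exported. A minimal Python check; the `require_api_key` helper name is our own, not part of the LLamaMe API:

```python
import os


def require_api_key() -> str:
    """Return the LLamaMe API key, failing fast with a clear message if unset."""
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError(
            "API_KEY is not set; copy it from your LaunchIT workspace dashboard."
        )
    return key
```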
Here's an example Python script that checks which model is being hosted and asks the LLM to tell you a joke. Replace the endpoint and model with those displayed in your LaunchIT workspace dashboard.
NOTE Please make sure your environment is configured properly. See this internal CZ Gitlab repo for more details.
```python
import os

from openai import OpenAI

API_KEY = os.environ.get("API_KEY")

# Use the endpoint shown in your LaunchIT workspace dashboard.
client = OpenAI(base_url="<LLamaMe endpoint>", api_key=API_KEY)

# Check which LLM model LC is hosting
print(client.models.list())

chat_response = client.chat.completions.create(
    model="<LLamaMe model>",
    messages=[
        {"role": "user", "content": "Tell me a joke."},
    ],
)
print("Chat response:", chat_response)  # Enjoy!
```