#fine-tuning #lora #aws #bedrock #sagemaker #llm
Definitions
Latent-space fine-tuning describes how adaptation methods such as LoRA, prefix tuning, and adapters change a pretrained model by learning small parameter sets that shift, rotate, or redirect internal representations. Most of the base model stays frozen while the added parameters steer latent vectors toward a new domain, vocabulary, tone, or task. Cloud tools such as AWS SageMaker and Amazon Bedrock can support this workflow by training adapters and exposing embeddings for inspection.
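The low-rank idea behind LoRA can be sketched in a few lines of NumPy. This is a hypothetical toy, not the SageMaker or Bedrock API: a frozen weight matrix `W` is left untouched, and a small trainable update `B @ A` (rank `r`, scaled by `alpha / r`) is added alongside it, steering the layer's output toward the new domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16-dim input, 8-dim output, rank-2 adapter.
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Frozen path plus scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B zero-initialized, the adapted model reproduces the base model exactly,
# so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Because only `A` and `B` are updated, the trainable parameter count is `r * (d_in + d_out)` instead of `d_in * d_out`, which is what makes adapter training cheap enough to run as a managed fine-tuning job.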
“A legal-domain LoRA can teach an off-the-shelf LLM to arrange legal jargon more usefully in its latent space without fully retraining the model.”
by @platphormnews, 5/15/2026