AI Navigate

Cognitively Layered Data Synthesis for Domain Adaptation of LLMs to Space Situational Awareness

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper addresses the challenge of adapting large language models (LLMs) to the complex engineering domain of space situational awareness (SSA), where insufficient structural alignment with mission chains, absent higher-order cognitive supervision, and a shortage of high-quality supervised fine-tuning data remain the key obstacles.
  • It proposes BD-FDG, a Bloom's Taxonomy-based framework that organizes domain knowledge structurally via a knowledge tree, models questions across nine categories and six cognitive levels, and applies automated quality control to generate fine-tuning data with graded difficulty and domain rigor (a minimal sketch of the layered question grid follows this list).
  • Using BD-FDG, the authors create a large SSA-specific dataset (SSA-SFT) with approximately 230K samples and fine-tune the Qwen3-8B model to produce SSA-LLM-8B, achieving significant performance improvements in SSA tasks while maintaining general benchmark capabilities.
  • The results demonstrate that cognitive layering combined with structured data generation is an effective approach to domain-specific adaptation of LLMs in complex engineering areas, and they offer a transferable methodology for other domains.
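
The cognitively layered question modeling is straightforward to picture in code. Below is a minimal sketch, assuming the 9 × 6 grid of question categories and Bloom levels is enumerated as explicit specs that seed downstream question generation; the six levels come from the abstract, while the nine category names are hypothetical placeholders (the page does not list them).

```python
# Illustrative sketch in the spirit of BD-FDG's cognitively layered
# question modeling. The six Bloom levels are from the paper; the nine
# category names below are HYPOTHETICAL placeholders for illustration.
from dataclasses import dataclass
from itertools import product

BLOOM_LEVELS = ["Remember", "Understand", "Apply",
                "Analyze", "Evaluate", "Create"]

# Hypothetical question categories -- not the paper's actual list.
CATEGORIES = ["definition", "mechanism", "comparison", "calculation",
              "procedure", "diagnosis", "design", "prediction", "evaluation"]

@dataclass
class QuestionSpec:
    category: str
    bloom_level: str
    difficulty: int  # graded difficulty, rising with cognitive level

def build_question_grid() -> list[QuestionSpec]:
    """Enumerate one spec per (category, level) cell: 9 x 6 = 54 cells.
    Each spec would seed an LLM prompt that drafts a question grounded
    in a knowledge-tree node, yielding a continuous difficulty gradient."""
    return [
        QuestionSpec(cat, level, difficulty=BLOOM_LEVELS.index(level) + 1)
        for cat, level in product(CATEGORIES, BLOOM_LEVELS)
    ]

if __name__ == "__main__":
    grid = build_question_grid()
    print(len(grid), "question specs")  # 54
    print(grid[0])                      # lowest-difficulty cell
```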


arXiv:2603.09231 (cs)
[Submitted on 10 Mar 2026]

Title: Cognitively Layered Data Synthesis for Domain Adaptation of LLMs to Space Situational Awareness

Abstract: Large language models (LLMs) demonstrate exceptional performance on general-purpose tasks. However, transferring them to complex engineering domains such as space situational awareness (SSA) remains challenging owing to insufficient structural alignment with mission chains, the absence of higher-order cognitive supervision, and poor correspondence between data quality criteria and engineering specifications. The core bottleneck is the construction of high-quality supervised fine-tuning (SFT) datasets. To this end, we propose BD-FDG (Bloom's Taxonomy-based Domain-specific Fine-tuning Data Generation), a framework that addresses incomplete knowledge coverage, shallow cognitive depth, and limited quality controllability through three mechanisms: structured knowledge organization, cognitively layered question modeling, and automated quality control. The framework uses a knowledge tree to ensure structured corpus coverage, designs a question generation scheme spanning nine categories and six cognitive levels from Remember to Create to produce samples with a continuous difficulty gradient, and applies a multidimensional scoring pipeline to enforce domain rigor and consistency. Using BD-FDG, we construct SSA-SFT, a domain dataset of approximately 230K samples, and fine-tune Qwen3-8B to obtain SSA-LLM-8B. Experiments show that SSA-LLM-8B achieves relative BLEU-1 improvements of 144% (no-think) and 176% (think) on the domain test set and a win rate of 82.21% over the baseline in arena comparisons, while largely preserving general benchmark performance (MMLU-Pro, MATH-500). These results validate SFT data construction driven by cognitive layering as an effective paradigm for complex engineering domains and provide a transferable framework for domain-specific LLM adaptation.
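
The multidimensional scoring pipeline mentioned in the abstract suggests a gate that keeps a candidate sample only if every quality dimension clears a threshold. Here is a minimal sketch under that assumption; the dimension names (domain_rigor, consistency) echo the abstract's wording, but the scorers and thresholds are illustrative stand-ins for whatever rule-based or LLM judges the authors actually use.

```python
# Minimal sketch of a multidimensional quality filter, assuming each
# candidate QA pair is scored along several axes and kept only if every
# axis clears its threshold. Axis names, scorers, and thresholds are
# ASSUMPTIONS for illustration, not the paper's actual pipeline.
from typing import Callable

Scorer = Callable[[str, str], float]  # (question, answer) -> score in [0, 1]

def keep_sample(question: str, answer: str,
                scorers: dict[str, Scorer],
                thresholds: dict[str, float]) -> bool:
    """Reject a sample if any quality dimension falls below its threshold."""
    return all(
        scorers[dim](question, answer) >= thresholds[dim]
        for dim in scorers
    )

# Toy scorers standing in for rule-based or LLM-based judges.
scorers = {
    "domain_rigor": lambda q, a: 1.0 if "orbit" in a.lower() else 0.3,
    "consistency":  lambda q, a: 1.0 if a.strip() else 0.0,
}
thresholds = {"domain_rigor": 0.5, "consistency": 0.5}

sample_q = "What perturbations dominate LEO orbit decay?"
sample_a = "Atmospheric drag is the dominant perturbation for low Earth orbits."
print(keep_sample(sample_q, sample_a, scorers, thresholds))  # True
```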
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09231 [cs.AI]
  (or arXiv:2603.09231v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09231

Submission history

From: Ding Linghu
[v1] Tue, 10 Mar 2026 06:04:53 UTC (234 KB)