Autism Spectrum Disorder (ASD) presents unique challenges in daily life for patients, families, and caregivers, often requiring personalized, real-time support. LLM4Autism is a novel edge-deployed Large Language Model system designed to enhance autonomy, reduce disability severity, and promote social inclusion through local, privacy-preserving AI assistance. Leveraging federated learning, multimodal data integration, and explainable AI, the system processes behavioral, physiological, and contextual inputs (e.g., EEG, HRV, video, and caregiver text) to provide tailored prompts, early warnings, and actionable insights. Importantly, we acknowledge the substantial technical difficulty of fusing these heterogeneous data streams in real time on constrained devices: sampling-rate mismatches, cross-modal synchronization and alignment, noise and missing data, and strict latency and memory budgets. Our architecture addresses these constraints through parameter-efficient fine-tuning and quantization for on-device inference, adaptive sampling combined with early- and late-fusion strategies, time-code alignment with fail-safe degradation paths, and privacy-by-design federated updates. This paper outlines the system's architecture, grounds its scientific foundations in a comprehensive literature review, demonstrates alignment with project goals, and validates the design through four diverse use-case scenarios. We discuss risks and open issues, including edge performance trade-offs, bias, and governance, alongside future directions. LLM4Autism, developed with stakeholder engagement, represents a pragmatic, GDPR-compliant step toward trustworthy, local AI for ASD care.
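To make the fusion constraints concrete, the minimal sketch below illustrates one plausible way to combine time-code alignment, late fusion, and a fail-safe degradation path on a constrained device. It is an assumption-laden illustration, not the system's actual implementation: the names `align_and_fuse`, `ModalityFrame`, and the per-modality staleness budgets in `MAX_AGE` are hypothetical stand-ins introduced here for exposition.

```python
# Illustrative sketch only: align_and_fuse, ModalityFrame, and MAX_AGE are
# hypothetical stand-ins, not the LLM4Autism API.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ModalityFrame:
    """A feature vector from one stream, stamped with its capture time (s)."""
    timestamp: float
    features: List[float]

# Per-modality staleness budgets in seconds (assumed values): how old a
# frame may be and still be considered aligned with the current decision.
MAX_AGE = {"eeg": 0.5, "hrv": 2.0, "video": 0.2, "caregiver_text": 30.0}

def align_and_fuse(
    streams: Dict[str, List[ModalityFrame]],
    now: float,
) -> Optional[Dict[str, List[float]]]:
    """Late fusion with fail-safe degradation.

    For each modality, select the most recent frame no older than its
    staleness budget; stale or empty streams are dropped rather than
    blocking inference. Returns None when no modality is fresh, signaling
    the caller to fall back to a degraded (e.g., text-only) mode.
    """
    fused: Dict[str, List[float]] = {}
    for name, frames in streams.items():
        budget = MAX_AGE.get(name, 1.0)
        fresh = [f for f in frames if now - f.timestamp <= budget]
        if fresh:
            # Keep only the latest fresh frame per modality; upstream
            # adaptive sampling is assumed to have normalized feature rates.
            fused[name] = max(fresh, key=lambda f: f.timestamp).features
    return fused or None

if __name__ == "__main__":
    now = 100.0
    streams = {
        "eeg": [ModalityFrame(99.8, [0.1, 0.4])],
        "video": [ModalityFrame(98.0, [0.9])],  # stale: silently dropped
        "hrv": [ModalityFrame(99.0, [62.0])],
    }
    print(align_and_fuse(streams, now))  # EEG and HRV survive; video degrades
```

The design choice sketched here, dropping stale modalities instead of waiting for them, keeps worst-case latency bounded on the edge device, which is the intent behind the fail-safe degradation paths mentioned above.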