I’m researching edge AI solutions for predictive maintenance in industrial IoT systems. What performance, bandwidth, or model optimization challenges have you encountered when running anomaly detection or failure prediction directly at the edge? How do you balance accuracy with real-time constraints?
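To make the question concrete, here's a minimal sketch of the kind of pipeline I have in mind: a small autoencoder anomaly detector over fixed-length sensor windows, with post-training dynamic quantization as a first optimization step and a rough latency comparison against the float32 original. Everything here (window size, bottleneck width, the 0.05 anomaly threshold) is a placeholder assumption, not a recommendation:

```python
import time
import torch
import torch.nn as nn

WINDOW = 128  # samples per sensor window (assumption)

class SensorAutoencoder(nn.Module):
    """Tiny dense autoencoder; anomaly score = reconstruction error."""
    def __init__(self, window=WINDOW, bottleneck=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, bottleneck))
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 64), nn.ReLU(), nn.Linear(64, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SensorAutoencoder().eval()  # assume weights were trained offline

# Dynamic quantization: Linear weights stored as int8, activations
# quantized on the fly -- a common first step before edge deployment.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

def latency_ms(m, x, runs=200):
    """Average inference latency per window, in milliseconds."""
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs * 1e3

x = torch.randn(1, WINDOW)  # stand-in for one normalized vibration window
print(f"float32: {latency_ms(model, x):.3f} ms/window")
print(f"int8:    {latency_ms(quantized, x):.3f} ms/window")

# The thresholding step is where accuracy vs. real-time tension shows up:
# a tighter threshold catches more faults but raises false-alarm rates.
with torch.no_grad():
    err = torch.mean((quantized(x) - x) ** 2).item()
print("anomaly" if err > 0.05 else "normal")  # 0.05 is a placeholder threshold
```

I'm especially curious how people handle the threshold-calibration side of this under sensor drift, and whether quantization-induced changes in reconstruction error have bitten anyone in practice.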