Luma AI opens Arabic-enabled image API across MENA - Communicate Online

By Communicate Staff

Luma AI has opened access to its Uni-1.1 API, introducing production-grade image generation and natural-language editing tools with native Arabic-language rendering for developers and enterprises across the Gulf and wider Middle East and North Africa region.

The San Francisco-based company said the REST-based interface to its Unified Intelligence model is designed to support broadcasters, advertising agencies, streaming platforms and content studios seeking AI-powered visual production tools tailored to Arabic-language markets.

The launch comes as Gulf investment in artificial intelligence infrastructure and Arabic-language technology accelerates. Among Luma AI’s investors is HUMAIN, a Saudi artificial intelligence company backed by the Public Investment Fund.

Luma AI said Uni-1.1 arrives amid growing regional demand for AI tools capable of generating culturally aware Arabic-language visuals, addressing what it described as a longstanding limitation among Western AI image models.

The company said the model was trained with artists across multiple cultural forms and is designed to produce cinematic lighting, material accuracy and regionally aware imagery by default.

“Uni-1.1 was built to give developers and creative teams the tools to produce work that is not just technically correct, but genuinely beautiful — across any language, any culture, any market,” Amit Jain, chief executive and co-founder of Luma AI, said in a statement.

Unified model targets coherent image generation

Luma AI said Uni-1 differs from conventional generative AI systems that combine separate language and image models during inference.

Instead, the company said the model uses a decoder-only autoregressive transformer architecture in which text and image tokens share a single sequence, allowing the system to resolve structural and creative intent before image generation begins.

The company said the approach improves coherence in multi-constraint prompts and produces outputs intended to preserve composition, identity and style consistency.
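The shared-sequence idea described above can be illustrated with a toy sketch. This is a conceptual illustration only, not Luma AI's implementation: the vocabulary, tokenizer and decoding function here are invented stand-ins. The point it shows is that text and image tokens live in one sequence, so each image token is generated conditioned on the full prompt and on the image tokens produced so far.

```python
# Toy illustration of a decoder-only model whose sequence mixes text and
# image tokens in one joint vocabulary. Hypothetical sketch only; Luma AI
# has not published Uni-1's vocabulary or tokenizer details.

TEXT_VOCAB = {"a": 0, "red": 1, "lantern": 2, "<img>": 3}  # invented example
IMAGE_VOCAB_OFFSET = 100  # image tokens occupy a disjoint id range

def tokenize_prompt(words):
    """Map prompt words into the shared sequence, ending with an <img> marker."""
    return [TEXT_VOCAB[w] for w in words] + [TEXT_VOCAB["<img>"]]

def next_image_token(sequence):
    """Stand-in for the decoder: each new image token is a function of the
    entire sequence so far, i.e. conditioned on text AND earlier image tokens."""
    return IMAGE_VOCAB_OFFSET + (sum(sequence) % 7)

def generate(words, n_image_tokens=4):
    seq = tokenize_prompt(words)
    for _ in range(n_image_tokens):
        seq.append(next_image_token(seq))  # autoregressive: one token at a time
    return seq

print(generate(["a", "red", "lantern"]))
```

Because the text tokens are part of the same sequence the decoder conditions on, the prompt's constraints are "resolved" in the same pass that produces the image tokens, which is the coherence property the company describes.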

Luma AI said Uni-1 currently ranks first in Human Preference Elo for overall style and editing as well as for reference-based generation, while also leading the RISEBench benchmark on spatial reasoning.

The model is already deployed in production across companies including Envato, Comfy, Runware, Krea, Fal and LovArt.

Developers gain multilingual production workflows

The Uni-1.1 API is available through Python and JavaScript/TypeScript software development kits, with capabilities including text-to-image generation, reference-guided image generation and natural-language editing.

Luma AI said the system accepts up to nine reference inputs per request, enabling brands and studios to maintain consistent visual identity, character continuity and culturally grounded imagery across campaigns and productions.

The company said developers can also modify backgrounds, lighting, composition and Arabic-language visuals using plain-language instructions without requiring complex prompt engineering.
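A request combining the features described above might be assembled as in the sketch below. The endpoint URL and field names here are hypothetical placeholders, not Luma's documented schema; only the nine-reference limit comes from the article. Developers should consult the official SDK and API reference for the real interface.

```python
# Sketch of a REST request body for an image API of this kind: a text
# prompt, reference images, and a plain-language edit instruction.
# API_URL and all payload field names are hypothetical, not Luma's schema.
import json

API_URL = "https://api.example.com/v1/images"  # placeholder endpoint
MAX_REFERENCES = 9  # per the article: up to nine reference inputs per request

def build_request(prompt, reference_urls=(), edit_instruction=None):
    """Return a JSON request body, enforcing the reference-input limit."""
    if len(reference_urls) > MAX_REFERENCES:
        raise ValueError(f"at most {MAX_REFERENCES} reference inputs allowed")
    payload = {"prompt": prompt, "references": list(reference_urls)}
    if edit_instruction:
        # Plain-language edit, no structured prompt engineering required
        payload["edit"] = edit_instruction
    return json.dumps(payload)

body = build_request(
    "فانوس أحمر في سوق قديم",  # Arabic prompt: "a red lantern in an old market"
    reference_urls=["https://example.com/brand-style.png"],
    edit_instruction="warm the lighting and soften the background",
)
print(body)
```

The same Arabic prompt, brand references and edit instruction would travel in one request, which is how a campaign workflow could keep visual identity consistent across variations.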

Luma AI positioned the technology for “agent-native” workflows, allowing media teams and agencies to automate reference gathering, prompt enhancement and multilingual content production.

The company said the API is offered in two pricing tiers, Build and Scale, with usage-based billing and dedicated support for production workloads. It added that the service runs at less than half the price, and with less than half the latency, of comparable models.