TL;DR
- Figma has temporarily disabled its AI-powered “Make Design” feature after accusations of copying Apple’s Weather app design.
- The controversy arose when developer Andy Allen demonstrated that the AI tool repeatedly produced designs similar to Apple’s app.
- Figma CEO Dylan Field denied that the tool was trained on Apple’s designs, attributing the issue instead to the company’s own “bespoke design systems.”
- Figma uses off-the-shelf AI models from OpenAI and Amazon, raising questions about the training data for these models.
- The company plans to review its processes and improve the variability of designs before re-enabling the feature.
Figma, a popular design software company, has temporarily disabled its AI-powered “Make Design” feature following accusations that it was generating designs strikingly similar to Apple’s Weather app.
The controversy has sparked discussions about the ethical use of AI in design tools and the challenges of ensuring originality in AI-generated content.
The issue came to light when Andy Allen, CEO of Not Boring Software, demonstrated that Figma’s AI tool consistently produced designs closely resembling Apple’s Weather app when asked to create a weather application. Allen’s posts on social media platform X (formerly Twitter) quickly gained attention, raising concerns about potential legal issues for designers using the tool.
> “Figma AI looks rather heavily trained on existing apps. This is a ‘weather app’ using the new Make Designs feature and the results are basically Apple’s Weather app (left). Tried three times, same results.”
>
> — Andy Allen (@asallen), July 1, 2024
In response to the growing controversy, Figma CEO Dylan Field addressed the situation in a series of posts on X. Field categorically denied that the “Make Design” feature was trained on Apple’s designs or any other app designs.
He stated, “the Make Design feature is not trained on Figma content, community files, or app designs,” and labeled the accusations of data training as false.
Field attributed the problem to Figma’s use of “bespoke underlying design systems” created by the company. He acknowledged that this approach resulted in low variability in the generated designs, which likely contributed to the similarities with Apple’s app.
Taking responsibility for the oversight, Field admitted, “Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline for Config,” referring to the company’s annual conference.
Figma’s CTO, Kris Rasmussen, provided further insight into the company’s AI implementation. He revealed that Figma uses off-the-shelf AI models, specifically OpenAI’s GPT-4 and Amazon’s Titan Image Generator G1, rather than training their own models. This revelation raises questions about the training data used by these third-party models and whether they might have incorporated designs from existing apps.
The company’s decision to use pre-trained models was partly due to its commitment to transparency regarding AI training policies. Figma recently introduced policies allowing users to opt in or out of having their content used for AI training, with a deadline of August 15 for users to make their choice.
As Figma works to address the issue, Rasmussen stated that the company is “doing a pass over the bespoke design system to ensure that it has sufficient variation and meets our quality standards.” He emphasized that this review is crucial before re-enabling the Make Design feature.
The controversy surrounding Figma’s AI tool highlights the broader challenges facing the creative industry as AI becomes more prevalent in design software. It underscores the need for careful consideration of training data, output variability, and ethical implications when implementing AI in creative tools.
This incident is not an isolated one in the world of AI-powered creative tools; other companies, such as Adobe, have faced similar scrutiny over their AI features and data usage policies.
These situations emphasize the delicate balance between leveraging AI to enhance creativity and ensuring the originality and legal compliance of AI-generated content.