Innovating responsibly: Getting AI systems into production

Difficult conversations aren’t just necessary in responsible AI — they’re the foundation of getting it right. In this panel, experts tackled the tough questions: how to drive innovation without compromising ethics, and how to ensure AI is both effective and trustworthy from the ground up. From addressing data privacy and bias to implementing robust monitoring and governance frameworks, this session offered practical insights for innovators and teams on building AI systems that scale safely and responsibly.
Key speakers
- Moderator: George Mathew, Insight Partners Managing Director
- Speaker: Gaurab Bansal, Responsible Innovation Labs Executive Director
- Speaker: Chloé Bakalar, Meta Chief Ethicist for Generative AI
- Speaker: Sarah Bird, Microsoft Chief Product Officer of Responsible AI
These insights came from our ScaleUp:AI event in November 2024, an industry-leading global conference that features topics across technologies and industries. Watch the full session below:
Key takeaways
- Responsible AI is both a mindset and a methodology. Startups and enterprises need to align their teams, forecast risks, curate data, and continuously improve their AI systems.
- Embedding ethics within AI product development ensures accountability. Meta integrates ethical considerations from ideation to post-launch evaluations.
- Security and regulatory compliance are core to responsible AI. Microsoft emphasizes security as a key pillar, incorporating AI red teaming and governance models.
- Global AI regulation is evolving. The EU AI Act sets stringent compliance requirements, while the U.S. remains in a learning posture.
- Speed and responsibility must go hand in hand. Rapid AI advancements should be accompanied by purposeful, deliberate decision-making.
- AI alignment and oversight are ongoing challenges. Ensuring AI systems remain trustworthy requires adaptable governance structures and clear customization frameworks.
The evolution of responsible AI
“Responsible AI is both a mindset and a methodology,” said Bansal. “For startups, it’s about focusing on your use case.” He pointed to auditing, testing, and data curation as crucial components. For startups in particular, Bansal emphasized the importance of weighing their resource bandwidth against that of large enterprises like Microsoft or Meta.
“Responsible AI is both a mindset and a methodology.”
Bakalar outlined Meta’s approach: “The gamble that we took as Meta was to embed ethics within the entire product development process, so from ideation all the way through launch, and then all the post-launch evals.”
Security, trust, and regulatory shifts
Responsible AI goes beyond ethics — it’s also about security and regulatory compliance. Bird highlighted the shift: “With generative AI, we’re seeing new types of attacks, like prompt injection attacks coming through the user flow or through the data. And so we’re starting now to see a lot more intermingling between how we need to push the boundaries in security and what we need to do for AI.”
Regulation is another factor. “The EU AI Act is coming into force,” Bansal noted. “Congress has been in a learning posture, but looking ahead to next year, they’ll be very focused on tax…from just like a mind share perspective, it’s not clear to me that they will be focused on AI, absent a crisis.”
“If you’re a smaller organization and you’re just starting, a lot of this is easier if you’re doing it from scratch.”
Bird added, “If you’re a smaller organization and you’re just starting, a lot of this is easier if you’re doing it from scratch. So if you, for example, need to govern your data, invest in a solution right now when you have a small amount of data. Then you’ll continue with that practice as you go. If you have a large amount of data, and you have to go find all of it in the company and actually build it into a new governance structure — that is much, much harder.”
“Speed isn’t by itself a problem”
With the rapid evolution of AI, is the industry moving too fast? Bakalar offered a perspective: “Speed isn’t by itself a problem. When we think about the incredible opportunities that these technologies bring, the positive value that they can add…I’m now thinking specifically about the Global South…For them, time is important.”
Bird reinforced this point: “Our responsible AI practice now is almost 10 years old, and it’s been a journey. When ChatGPT burst onto the scene, that was the first time [people] really thought about this technology, and so it feels crazy fast.”
Preparing for AI’s future challenges
Looking ahead, the conversation around AI alignment and superintelligence is intensifying. Bakalar challenged common assumptions: “I think that focusing too much on that side of the conversation means that we tend to divert attention and focus away from the work that needs to be happening right now.”
Bird emphasized the need for human oversight: “This journey of what is the right role of the human in the task and the governance is something we’re already working on, and we’re going to continue to have to advance.”
“Customers will demand this of you”
Wrapping up the discussion, panelists shared their top recommendations:
- “Start having the conversation about how your organization should do it,” urged Bird. “Start testing your systems.”
- “Have hard conversations. Have them early. Have them often,” said Bakalar. “Be thoughtful about who’s in the room to help answer [questions], especially for these really thorny areas.”
- “Put together your talk track or deck on what your [responsible AI] practices are,” Bansal advised. “I find you in the hall today, and I’m like, ‘What’s your responsible AI program?’ I hope there’s not three minutes of silence.”
- “Customers will demand this of you,” Bird concluded. “This is not optional.”
Watch more sessions from ScaleUp:AI, and see scaleup.events for updates on ScaleUp:AI 2025.