
More than a toolkit: How businesses actually implement AI successfully

  • Writer: PANTA
  • Sep 15
  • 3 min read

Updated: Sep 16

Post-event report on the webinar at the Hamburg Chamber of Commerce (11 September 2025), in collaboration with the "Akademie für Beruf und Karriere" (ABK).


Two presenters in front of the Hamburg Chamber of Commerce; logos of the Akademie für Beruf und Karriere and PANTA Upskilling; visual for the joint webinar.

On 11 September 2025, we hosted the webinar “More than a Toolkit: How businesses actually implement AI successfully” at the Hamburg Chamber of Commerce together with the Akademie für Beruf und Karriere (ABK). Moderated by Arian Okhovat Alavian (PANTA) and Urs-Johann Theissen (ABK), the session made one thing clear: the success of AI doesn’t hinge on the tools themselves, but on how organizations embed them.


The key message


AI only becomes effective in a company once strategy and use cases provide the framework, guardrails create safety, and a rhythm of learning takes hold. It’s equally crucial to take the team’s common concerns seriously (from fear of mistakes to the question of how to use newly freed-up time) and to build internal champions who spread knowledge across the organization.


What follows is a summary of the webinar’s key takeaways:


Strategy & Use Cases First

AI becomes sustainable when it’s integrated into day-to-day work. Not as an add-on, but as part of processes and goals. In practice, this means: first set the direction (where can AI deliver demonstrable value this quarter?), then select one or two use cases that provide real relief, think across departments, and make successes visible. As a change framework, ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) has proven effective: a shared vocabulary that helps leaders and teams plan change in a structured way.


Guidelines & Governance Provide Assurance

Many hesitations boil down to standard questions: “Am I allowed to use AI?”, “Which tools are permitted?”, “Who checks privacy and quality?” Our recommendation: at the start, a short mini-policy (2–3 do’s & don’ts, e.g., privacy and human review), clear accountabilities (who approves, who reviews), and defined roles (leadership sets the cadence, employees co-create, HR/Compliance as guardrails). This weeds out shadow IT and makes discussions more objective.


Critically Important: Address Fears, including the question of “freed-up time”

Without clear orientation, tool proliferation and ad-hoc rules amplify uncertainty: fear of mistakes, job concerns, and loss of trust after faulty outputs. Guidelines and transparent processes reduce pressure and create space to reinvest freed-up time sensibly (quality assurance, customer contact, onboarding, documentation, targeted learning time). It’s important to manage this reinvestment deliberately and make it visible on a regular basis.


Learning as a Process, Not an Event

AI is constantly evolving; learning has to keep pace. What has proven effective in projects: a monthly office hour, a “use case of the week,” short how-tos/FAQs, internal showcases, and a central knowledge base with guidelines and examples. In addition, internal learning paths (an internal “AI Academy,” mandatory and advanced trainings, learning on demand) and external input from universities, associations, and networks help. Since February 2025, AI literacy has also been a regulatory requirement under the EU AI Act.


Format Portfolio for Engagement and Momentum

Broad participation emerges when learning is tangible: lunch-and-learns, hackathons, idea challenges, and peer learning lower barriers and bring the topic into everyday work. Communities of practice help good solutions spread; internal AI champions make successes visible. Externally, conferences, association work (e.g., KI.NRW, Bitkom, BVDW), and targeted coaching keep capabilities current.


And the most promising lever: AI ambassadors

Once strategy, use cases, guardrails, and a learning cadence are in place, ambassadors become the lever to scale across the organization: one or two people per area, with expertise in their own field and a knack for explaining things. They curate examples from their own context, answer day-to-day questions, and feed feedback back into governance and training. The principle is familiar from earlier O365/Teams rollouts (the champions model): “domain-native help” noticeably accelerates adoption. In Hamburg, this pattern is also visible at large companies, for example at the Otto Group, which has publicly reported on its internal AI multipliers.



Concrete measures from the session, in brief:


  • Mini-policy & kick-off: briefly explain why AI matters now; 2-3 do’s & don’ts (e.g., privacy, human review); set up a help channel (Teams/Slack “AI Questions”).

  • Test quick wins: e.g., automate meeting notes, demo to the team, collect feedback, iterate.

  • Roles & processes: define permitted tools; clarify approval paths and review steps; appoint a governance team or AI ethics board.

  • Establish a learning cadence: office hour, “use case of the week,” short how-tos/FAQs, internal showcases, learning paths.

  • Scale participation: lunch-and-learns, hackathons, idea challenges, communities of practice, internal “AI champions.”

  • Plug into external networks: universities, associations, networks, conferences, external coaches.

  • Make successes visible: regularly track and share time savings, quality gains, and satisfaction.



The webinar showed: AI becomes effective when people, routines, and values evolve alongside it. Tools are only the beginning. PANTA and ABK focus precisely on that: clear guardrails, measurable quick wins, and learning formats that stick. In Hamburg, PANTA and ABK also offer in-person trainings (e.g., the “AI Expert” program).



