An Interview With Chad Silverstein
“A bug is never just a mistake. It represents something bigger. An error of thinking that makes you who you are.”
As part of this series, we had the pleasure of interviewing Matteo Senardi.
Matteo Senardi, Head of Data & AI at Docsity, leads the company’s data, analytics, and artificial intelligence strategy, building cloud-native data foundations and AI-powered learning products for a global student audience. A recent example is Docsity AI, designed to transform learning materials into effective study resources through the use of AI. With more than a decade of experience across data engineering, cloud, and applied machine learning, Matteo is a technical leader focused on translating data into scalable, impactful solutions.
Thank you so much for doing this with us! To set the stage, tell us briefly about your childhood and background.
I am based in Turin, and my path started early with an engineering mindset in Modena, where I was jailbreaking iPhones during my studies. I studied Computer Engineering (both B.Sc. and M.Sc.) in Modena, where I focused on visual analytics, then moved into data engineering and applied machine learning. Over the last 10+ years, I have worked across data, cloud, and AI, from building Big Data architectures to leading cross-functional teams. Since 2020, I have led Data & Analytics/Innovation at Docsity, where we built cloud-native data foundations and AI product capabilities for a global student audience. One of the outcomes has been Docsity AI, a set of active learning features built around summaries, concept maps, quizzes, and document chat. Since January 2025, I have also led Data & AI at Gruppo Dylog, driving practical copilots and AI-powered operations. My background is technical, but my focus is always measurable business value.
What were the early challenges you faced in your career, and how did they shape your approach to leadership?
Early in my career, I learned that being technically correct is not enough if systems are hard to scale or decisions are hard to trust. I had to bridge messy, multi-source data with real business timelines, then explain tradeoffs to non-technical stakeholders. That shaped how I lead today. At Docsity, I pushed for robust GCP foundations with Terraform and Airflow before layering AI features, because reliability comes first. The same principle applies to user-behavior funnels across websites and apps, and to copilots: start with operational pain, define clear KPIs, then iterate with the team. Leadership for me means creating clarity, protecting engineering quality, and making AI useful in daily workflows, not just impressive in demos.
We often learn the most from our mistakes. Can you share one mistake that turned out to be one of the most valuable lessons you have learned?
One of my most valuable mistakes was assuming that a state-of-the-art benchmark would automatically fit our users. In AI summary generation, many best-practice setups target outputs around 30%-40% of the original text. We applied that logic early, and while the summaries looked technically strong, many Docsity students found them too short for effective study. That taught me a critical leadership lesson: quality is not just model quality, it is user-fit quality. We then ran server-side A/B tests on different summary lengths and measured learning usefulness signals to identify the best percentage for our audience. Since then, I never optimize for benchmark performance alone; I optimize for real user outcomes with human validation in the loop.
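The server-side A/B approach described above can be sketched with a minimal, library-free example. This is an illustration of the general technique, not Docsity's actual implementation; the variant sizes and "usefulness" counts below are hypothetical.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test comparing a binary 'usefulness' rate between two variants."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value under the normal approximation, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: users who rated summaries useful, per summary-length variant.
z, p = two_proportion_ztest(success_a=420, n_a=1000,   # shorter-summary variant
                            success_b=480, n_b=1000)   # longer-summary variant
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the longer-summary variant wins at conventional significance levels; in practice the metric would be a real learning-usefulness signal rather than a single binary rating.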
AI is a big leap for many businesses. When and what first sparked interest in incorporating it into your operations?
My interest in AI came from repeated exposure to document-heavy and decision-heavy workflows where humans were spending energy on extraction instead of reasoning. At Docsity, that became very concrete: students needed faster ways to digest complex materials, so we focused on AI features that transform uploaded documents into summaries, concept maps, quizzes, and interactive chat. The spark was practical, not theoretical; AI could compress time-to-understanding for learners at scale across a global community of millions of students. In parallel, in leadership roles, I saw similar patterns in business operations. That led to AI copilots for natural-language interaction and reconciliation workflows, always with human oversight so accuracy and accountability stay in the loop.
AI can be a game-changer for individuals and their responsibilities. Can you share how you personally use AI and what are your go-to resources or tools?
I use AI every day as an engineering and leadership accelerator, not as an autopilot. My go-to stack is a combination of cloud-native infrastructure (GCP, Azure, AWS), orchestration patterns with LangGraph, and LLMs for production-grade behavior. Practically, I use AI to structure exploration: compare architecture options, draft first-pass analyses, and pressure-test assumptions before team reviews. Then humans validate, refine, and decide. I also rely on the data platform layer I have built over time, with Terraform, Airflow, and BI governance, because AI quality depends on data quality. My rule is simple: use AI to reduce low-value effort and increase decision quality, while keeping explicit checkpoints where people own judgment and outcomes.
On the flip side, what challenges or setbacks have you encountered while implementing AI into your company?
The hardest part of AI implementation is rarely the model itself; it is the system around it. The main setbacks I have seen are inconsistent source data, unclear ownership, and unrealistic expectations about one prompt solving everything. At Docsity, user documents vary heavily in style and quality, so we had to design robust pipelines and validation logic before scaling AI features like summaries, concept maps, or quiz generation. In other enterprise contexts, the challenge is similar: trust, governance, and integration with existing processes. My response is to enforce human-in-the-loop checkpoints, monitor outcomes, and roll out in controlled phases. When teams see that AI improves quality and speed without removing accountability, adoption becomes much smoother and more sustainable.
Let’s dig into this further. Can you share the top 5 AI tools or different ways you are integrating AI into your business? What specific functions do they serve and what kind of result have you seen so far? If you can, please share a story or example for each.
Our AI strategy is model-agnostic across flagship models like Google Gemini, OpenAI’s GPT family, Anthropic Claude, and open-source options like Meta Llama. We choose the model mix based on quality, latency, cost, and governance for each use case.
1. Docsity AI Summaries.
Function: distill long study documents into essentials. Result: faster revision cycles. Example: students upload dense notes and get a concise recap before exams.
2. Docsity AI Concept Maps.
Function: visualize key relationships. Result: better retention of complex topics. Example: one chapter becomes a navigable map for quick review.
3. Docsity AI Quiz Generation.
Function: create practice questions with solutions. Result: active learning instead of passive reading. Example: learners run self-tests right after upload.
4. Docsity Chat with Document.
Function: grounded Q&A on uploaded material. Result: clearer understanding on demand. Example: students ask follow-up questions on unclear sections.
5. AI Copilots.
Function: support AI-driven workflows. Result: faster processing and better decisions. Example: natural-language copilots assist teams while humans approve outputs.
There is concern about AI taking over jobs. How do you balance AI tools with your human workforce and have you already replaced any positions using technology?
I understand the concern, but my operating model is augmentation, not replacement. I deploy AI to remove repetitive workload so people can focus on analysis, decision-making, and customer impact. In practice, that means AI drafts, classifies, summarizes, or suggests; humans validate, contextualize, and take responsibility for final actions. In the teams I lead, success is measured by better throughput and higher quality, not by reducing headcount. For example, in document-based and reconciliation workflows, AI reduces manual effort while domain experts handle edge cases and final judgment. This also improves learning inside the organization: people spend less time on mechanical tasks and more time building strategic and technical depth. That balance is non-negotiable for me.
Looking ahead, what is on the horizon in the world of AI that people should know about? What do you see happening in the next 3–5 years? I would love to hear your best prediction.
I expect AI to move from isolated assistants to orchestrated swarms of domain-specific agents connected to real business systems. The winning pattern will be hybrid: deterministic data pipelines plus LLM-based reasoning, wrapped in governance and observability. In education, tools like document chat, adaptive summaries, and practice generation will become more personalized and context-aware. In enterprise operations, copilots will increasingly support end-to-end workflows, but with explicit human approval gates for sensitive decisions. I also expect stronger emphasis on interoperability across cloud environments, since most serious organizations operate in a multi-cloud reality. My prediction: the biggest value will come from teams that operationalize AI with measurable KPIs, clear ownership, and disciplined feedback loops.
If you had to pick just one AI tool that you feel is essential, one that you have not mentioned yet, which would it be and why?
I would choose LangGraph. What makes it powerful for production copilots is the ability to orchestrate stateful agent workflows with strong control, including human review checkpoints, instead of relying on a single prompt-response loop. In practice, this means a copilot can coordinate different automations for active learning: generate a summary, build a concept map, create a quiz, and then open follow-up document chat while preserving context across steps. That is the difference between a generic assistant and a real learning copilot. For me, LangGraph is essential because it turns AI interactions into reliable, multi-step workflows students can actually use every day.
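The stateful pattern described above, where each step reads and extends shared state and a human checkpoint gates progression, can be sketched in plain Python. This is a library-free illustration of the pattern LangGraph enables, not its actual API; the function names and state fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class StudyState:
    """Shared context carried across workflow steps."""
    document: str
    summary: str = ""
    concept_map: str = ""
    quiz: list = field(default_factory=list)
    approved: bool = False

def summarize(state: StudyState) -> StudyState:
    # Placeholder for an LLM summarization call.
    state.summary = f"Summary of: {state.document[:20]}..."
    return state

def build_concept_map(state: StudyState) -> StudyState:
    # Later steps can see earlier outputs because state is preserved.
    state.concept_map = f"Map derived from: {state.summary}"
    return state

def human_checkpoint(state: StudyState, approve) -> StudyState:
    # A reviewer inspects intermediate output before the workflow continues.
    state.approved = approve(state)
    return state

def generate_quiz(state: StudyState) -> StudyState:
    if state.approved:  # quiz generation only runs after human approval
        state.quiz = [f"Question about: {state.concept_map}"]
    return state

def run_workflow(document: str, approve) -> StudyState:
    state = StudyState(document=document)
    steps = (summarize, build_concept_map,
             lambda s: human_checkpoint(s, approve), generate_quiz)
    for step in steps:
        state = step(state)
    return state
```

A framework like LangGraph adds what this sketch omits: persistence of state between runs, branching and retries, and interrupts that pause execution until a human responds.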
For the uninitiated, what advice would you give someone looking to integrate AI into their business but does not know where to start?
My advice is to start narrow, measurable, and human-centered. Pick one workflow with obvious friction, like long document analysis or repetitive steps, and run a focused pilot. Define baseline metrics first (time, quality, error rate), then test an AI-assisted flow with clear human checkpoints. Keep architecture simple at the start, but do not skip data hygiene and governance; weak inputs will sink any model. Involve the people who do the work daily, because adoption depends on trust and usability, not just model quality. Most importantly, evolve the project through continuous tests reviewed with domain experts, and use that feedback to improve each release cycle. Finally, scale only after you can prove value on one use case. AI implementation is not a one-time launch; it is a continuous improvement cycle led by business outcomes.
Where can our readers follow you to learn more about leveraging AI in the business world?
The best place to follow my work is LinkedIn, where I share practical info on data platforms, AI, and implementation patterns across education and enterprise contexts. I focus on what works in production: cloud architecture decisions, human-in-the-loop AI design, and leadership approaches for cross-functional teams. If you want to see technical projects and open-source work, you can also check my GitHub. I am always interested in connecting with people building useful AI systems, especially where measurable value, governance, and product impact need to coexist.
LinkedIn: https://www.linkedin.com/in/matteosenardi/
GitHub: https://github.com/pualien
Thank you so much for doing this with us!
About The Interviewer: Chad Silverstein is a seasoned entrepreneur with 25+ years of experience as a Founder and CEO. While attending Ohio State University, he launched his first company, Choice Recovery, Inc., a nationally recognized healthcare collection agency — twice ranked the #1 workplace in Ohio. In 2013, he founded [re]start, helping thousands of people find meaningful career opportunities. After selling both companies, Chad shifted his focus to his true passion — leadership. Today, he coaches founders and CEOs at Built to Lead and advises Authority Magazine’s Thought Leader Incubator.
Matteo Senardi of Docsity: How We Leveraged AI To Take Our Company To The Next Level was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
