Market research is an essential part of my day job developing business and product strategy, yet it can be incredibly time-intensive. AI is transforming how we conduct research, accelerating insights while still requiring human validation. I wanted to test whether AI could act as a true research partner, helping me brainstorm, structure, and analyze market insights in less time.

This post details how I worked with AI to develop a time-limited, research-driven analysis of AI adoption in industrial automation. The full Market Research report, including the specific prompts, responses, gotchas, data gaps, and an evaluation, is available here, and I recommend reading it first.

I only used public-domain information for this experiment and ensured that none of my prompts contained confidential information. Even though LLMs like ChatGPT offer an opt-out setting, described in their privacy policies, to keep your content from being used for model training, I remain cautious about the content I expose to LLMs unless I am working in a protected data environment.

For this experiment, I used ChatGPT’s GPT-4o model, which I will refer to as Deeplyn, my virtual assistant. I time-boxed the exercise: two hours to create the research report, plus an additional hour to finalize this blog post. Without Deeplyn, this process would have taken at least 10 hours. AI accelerated multiple steps; here’s how:

| Task | Working Independently (estimate) | Working with Deeplyn | Time Saved |
| --- | --- | --- | --- |
| Defining research scope | 1 hour | 10 minutes | 50 minutes |
| Gathering market data vs. prompt whispering | 2 hours | 30 minutes | 1 hour 30 minutes |
| Distilling research to extract insights vs. validating AI outputs | 2 hours | 20 minutes | 1 hour 40 minutes |
| Cross-checking references vs. evaluating against a rubric | 2 hours | 15 minutes | 1 hour 45 minutes |
| Structuring insights & report writing | 3 hours | 45 minutes | 2 hours 15 minutes |
| **Total** | 10 hours | 2 hours | ~8 hours saved |

1. Defining the Research Objective

When I first approached this experiment, I knew I needed to extract structured, quantifiable insights that could be useful for decision-making. My familiarity with the industrial automation market segment was essential to validate Deeplyn’s responses. At the same time, this experiment would show whether AI could augment and streamline the thinking process of an experienced professional.

I started by outlining the key areas of research. After a short and productive brainstorming session with Deeplyn, I landed on five focus areas:

  • Market size and AI adoption trends
  • Competitive landscape and leading players
  • Workforce upskilling for AI-driven automation
  • Business impact of AI (ROI, efficiency, financial benefits)
  • Barriers and challenges in AI implementation

With these areas defined, the next step was to figure out how to extract meaningful insights with Deeplyn.

2. Refining and Iterating for Better Insights

I knew that if I simply asked Deeplyn broad, open-ended questions, I would get surface-level responses—useful for a starting point, but not nearly rigorous enough for meaningful analysis. I call this process ‘prompt whispering’—iteratively fine-tuning my AI prompts to get sharper insights.

For example, my initial query, “What are AI adoption trends in industrial automation?”, returned only vague trends. After tweaking it to “Summarize AI adoption trends (2022-2024) with market size ($), CAGR (%), investment trends ($), and regional breakdowns,” the response included structured financial data instead of just high-level insights (see the Market Research report for prompt details). This is also an example of why some level of subject matter expertise is important for coaxing better responses.

Key Prompt Refinements:

  • I restructured prompts to demand numerical insights—market size, CAGR, company investments.
  • I requested AI to cite sources and flag uncertainty when data was unavailable.
  • I refined questions to focus on company-specific details rather than broad industry averages.
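The refinements above can be sketched as a small prompt-builder. This is a hypothetical helper of my own, not part of the original workflow; the function name and field list are illustrative assumptions:

```python
# Hypothetical sketch: turning a broad research question into a structured
# prompt that demands numbers, sources, and flagged uncertainty.

REQUIRED_FIELDS = ["market size ($)", "CAGR (%)", "investment trends ($)",
                   "regional breakdowns"]

def refine_prompt(topic: str, years: str = "2022-2024") -> str:
    """Build a prompt that asks for quantified, sourced insights."""
    fields = ", ".join(REQUIRED_FIELDS)
    return (
        f"Summarize {topic} trends ({years}) with {fields}. "
        "Cite sources for each figure and explicitly flag any data "
        "point you cannot verify as uncertain."
    )

print(refine_prompt("AI adoption in industrial automation"))
```

Templating the requirements this way keeps every query demanding the same numerical fields, so responses stay comparable across the five focus areas.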

Even with these refinements, there was a fundamental limitation: AI cannot access paywalled industry reports. This reinforced the importance of customizing LLMs with proprietary data for deeper, more accurate insights.

3. Running AI-Powered Research & Evaluating Results

Once I refined my prompts, I ran AI-assisted research queries and evaluated the results. Some responses were well-structured and data-rich, while others still had gaps. Rather than accepting them at face value, I critically evaluated the responses using a structured rubric that I developed with Deeplyn:

| Criteria | Evaluation Focus |
| --- | --- |
| Data Completeness | Are all key data points covered? |
| Source Credibility | Are sources authoritative (e.g., McKinsey, IDC)? |
| Relevance | Does the response directly support the research goal? |
| Quantifiability | Are figures presented in a structured, comparable format? |
| Depth & Context | Does the response provide explanations beyond the numbers? |
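As a sketch, the rubric can be applied as a simple scorer. The 1-5 scale and equal weighting here are my own assumptions for illustration, not taken from the report:

```python
# Hypothetical sketch: rating an AI response against the five rubric
# criteria on a 1-5 scale and averaging the result.

CRITERIA = ["Data Completeness", "Source Credibility", "Relevance",
            "Quantifiability", "Depth & Context"]

def score_response(scores: dict) -> float:
    """Average the 1-5 scores; every criterion must be rated."""
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be scored 1-5, got {value}")
    return sum(scores.values()) / len(scores)

example = {"Data Completeness": 3, "Source Credibility": 4, "Relevance": 5,
           "Quantifiability": 3, "Depth & Context": 4}
print(score_response(example))  # 3.8
```

Forcing a score for every criterion is the point: it prevents an impressive-sounding response from hiding a missing dimension, such as absent sourcing.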

For fun, I asked Deeplyn to evaluate its own responses, and unsurprisingly, AI was a lenient grader! My independent assessment found gaps, particularly in company-level financials and workforce training data. In other cases, Deeplyn hallucinated a couple of sources that did not exist. AI responses were strong at summarizing existing information but lacked proprietary insights, again emphasizing the need to cross-check AI-generated insights against real-world sources, with a subject matter expert in the loop.

4. Writing the Final Report

With my research validated and refined, I structured the final report. I didn’t want this to read like a raw AI data dump—it needed to be engaging, structured, and actionable.

My approach to prompting for the report:

  • Identifying and confirming document sections
  • Presenting quantifiable market insights in an easily consumable format
  • Using comparative tables to break down key findings
  • Including evaluation scores from the rubric
  • Critically analyzing AI-generated responses to highlight strengths and gaps

By doing this, the research evolved from a collection of AI-generated facts into a structured, insightful market analysis that informs business strategy.

5. Final Takeaways: Lessons Learned

This experiment reinforced several key takeaways about using AI for market research:

  • AI accelerates research, but human expertise is essential. AI is powerful at aggregating and structuring data, but validation is crucial.
  • Refining prompts is key. The first AI answer is rarely the best – iteration is necessary.
  • Caution is advised when sharing information with AI. Unless you are operating in a protected data environment, stay vigilant on what data you are sharing with LLMs during your interactions.
  • A structured evaluation framework prevents misleading conclusions. Without assessment, it’s easy to trust incomplete data.
  • AI vs. Traditional Research: AI summarizes publicly available data, while traditional research firms offer exclusive insights, direct surveys, and private datasets.

Based on this experience, here are some specific ways to accelerate market research:

  • Automate Competitive Analysis → Use AI to summarize competitor strategies, pricing, and positioning.
  • Enhance Trend Research → AI can scan thousands of reports quickly, structuring key insights.
  • Speed Up Data Compilation → Extract structured financials (market size, CAGR) into a ready-to-use format.

If you have not reviewed the Market Research report generated with Deeplyn, read it here.

AI has the potential to revolutionize market research, but only when used as part of a hybrid approach—where AI assists in data collection and synthesis, while human expertise ensures accuracy and strategic application. This experiment reinforced my thinking on responsibly integrating AI into the future of work. While it isn’t a replacement for traditional methods, it’s an invaluable tool for accelerating insights—so long as it’s used thoughtfully.
