Gemini AI vs. NotebookLM: 40 Key Differences

May 5, 2026

The Definitive Comparison · 2026 Edition

Quick Answer: What's the core difference?

Gemini AI is a general-purpose conversational agent designed to answer questions, generate content, and reason across almost any topic using its pre-trained knowledge. NotebookLM is a source-grounded research assistant that only answers based on documents you upload — it cannot access the internet or its own training data. Gemini is a Swiss Army knife; NotebookLM is a scalpel for your private corpus.

Users often confuse these two Google products because both use large language models. But choosing the wrong one leads to hallucinations (Gemini when you need strict grounding) or uselessness (NotebookLM when you need world knowledge). The 40 differences below cover architecture, memory, citations, pricing, workflow, and more.

In This Guide

01–08: Core Purpose & Architecture
09–16: Knowledge & Grounding
17–24: Memory, Context & Citations
25–32: Features & Integration
33–40: Pricing, Access & Use Cases

SECTION 1: CORE PURPOSE & ARCHITECTURE

01 – Conversational AI vs. Source-Grounded Research

What it is: Gemini is a generalist chatbot that answers any question using its parametric memory (training data). NotebookLM is a specialist that answers only from documents you provide — it does not use its own training knowledge.

Gemini is designed for open-domain dialogue: creative writing, coding, general Q&A, image analysis. It will attempt to answer even if it has no reliable information (risking hallucination). NotebookLM is designed for closed-domain research: you upload PDFs, Google Docs, websites, or slides, and the AI restricts its answers to those sources. It will refuse to answer questions not covered by your uploaded material.

Measured Impact: Use Gemini for brainstorming, writing assistance, or casual conversation. Use NotebookLM for contract analysis, academic literature reviews, or internal company documentation — where accuracy and source traceability are mandatory.

02 – Different Base Models Under the Hood

What it is: Gemini runs on Google's flagship Gemini 1.5 Pro/Flash models (1M-2M token context). NotebookLM currently uses a fine-tuned version of Gemini 1.5 Pro, but with retrieval-augmented generation (RAG) architecture built around your source library.

Both tools share DNA, but NotebookLM's RAG pipeline is heavily modified. When you ask a question, NotebookLM first performs semantic search across your uploaded documents, retrieves the most relevant passages, and then asks the LLM to answer based only on those passages. Gemini (in its default chat mode) does not perform this retrieval step unless you manually turn on "Google Search" grounding.

This architectural difference means NotebookLM is far less likely to hallucinate when answering questions within your sources. However, it is also much slower for first-token generation because of the retrieval phase.
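The retrieve-then-answer loop described above can be sketched in a few lines of Python. This is a deliberately crude illustration: the lexical-overlap scorer stands in for the semantic (embedding-based) search NotebookLM actually performs, and the prompt format is hypothetical.

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"\w+", text.lower())

def score(query, passage):
    """Crude lexical-overlap score standing in for real semantic search."""
    q = set(tokens(query))
    p = Counter(tokens(passage))
    return sum(p[w] for w in q)

def retrieve(query, passages, k=2):
    """Return the k passages most relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_grounded_prompt(query, passages):
    """Constrain the model to answer only from the retrieved context."""
    context = "\n---\n".join(retrieve(query, passages))
    return (
        "Answer ONLY from the context below. If the answer is not "
        f"present, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Q2 revenue grew 15% year over year.",
    "The office cafeteria menu changes weekly.",
    "Headcount was flat at 1,200 employees.",
]
prompt = build_grounded_prompt("How did revenue change in Q2?", docs)
```

The key design point is the final prompt: the model never sees the question alone, only the question bundled with retrieved passages and an instruction to stay inside them.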

03 – Pre-trained Knowledge vs. User-Supplied Corpus

What it is: Gemini knows about the world up to its training cut-off (currently April 2025 for Gemini 1.5). NotebookLM has no independent knowledge — it only knows what you upload.

If you ask Gemini "What is the capital of France?" it answers instantly because Paris is in its training data. If you ask NotebookLM the same question without first uploading a document containing that fact, it will say something like "I cannot answer based on the sources you have provided. Please upload a relevant document." This is not a limitation — it's a deliberate design to prevent hallucination in research settings.

Pro tip: NotebookLM's ignorance of world facts is a feature for compliance-sensitive work (legal, medical, finance). You can, however, upload a Wikipedia page or public PDF to give it that knowledge on demand.

04 – Live Web Search Capability

What it is: Gemini (free tier with limited daily quota) can access real-time Google Search results to answer current-events questions. NotebookLM has no internet search — all sources must be manually uploaded.

In Gemini, you can toggle "Google Search" or "Enable real-time information." The model then retrieves fresh search results and grounds its answer in them. NotebookLM has no such toggle. Even if you upload a document containing a URL, NotebookLM will not crawl that URL dynamically — you must save the page as a PDF or Google Doc and upload it manually. This makes NotebookLM unsuitable for current news or live data but excellent for static corpora where freshness is not required.

05 – Image, Video & Audio Input Support

What it is: Gemini natively processes images, video frames, and audio. NotebookLM's primary input is text documents (PDF, Google Docs, Slides, URLs as text), with very limited native image understanding — it extracts text via OCR but does not "see" visual content.

You can share a screenshot or a photo with Gemini and ask questions about what's in the image — identify objects, read text, interpret charts. NotebookLM will extract any embedded text from images (via OCR) but cannot describe visual elements beyond that. For audio, Gemini can transcribe and analyze spoken content; NotebookLM has no direct audio processing (you would need to transcribe first). This makes Gemini far more versatile for multimodal projects.

06 – Running Python Code & Data Analysis

What it is: Gemini has a built-in code interpreter (similar to ChatGPT's Advanced Data Analysis) that can execute Python, manipulate dataframes, and generate plots. NotebookLM has no code execution environment.

If you ask Gemini to calculate the moving average of a CSV column or visualize a dataset, it will write and run Python code in a sandbox, then return the output. NotebookLM can analyze data only by reading your uploaded spreadsheets as text and providing textual summaries — it cannot perform calculations, run statistical tests, or generate charts. For quantitative research, Gemini is the clear winner.
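As an illustration of the kind of analysis described here, a moving average over a CSV column takes only a few lines of standard-library Python, roughly what a code interpreter would write and execute in its sandbox. The column names and figures below are invented for the example.

```python
import csv
import io

def moving_average(values, window=3):
    """Trailing moving average; returns one value per full window."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Inline CSV standing in for an uploaded file.
raw = "month,sales\nJan,100\nFeb,110\nMar,120\nApr,130\nMay,140\n"
rows = list(csv.DictReader(io.StringIO(raw)))
sales = [float(r["sales"]) for r in rows]
ma = moving_average(sales, window=3)
# ma == [110.0, 120.0, 130.0]
```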

07 – Input Length Limits

What it is: Gemini supports a context window of up to 1 million tokens, expandable to 2 million on Gemini 1.5 Pro — enough for the entire text of the Lord of the Rings trilogy. NotebookLM supports up to roughly 500,000 tokens per notebook but handles sources as individual documents, not as a single giant context.

Gemini's huge context window allows you to paste entire books, codebases, or long research papers in a single turn and ask questions about the whole corpus. NotebookLM spreads the corpus across multiple uploaded files (max 100 per notebook, with each file up to roughly 200 pages). It retrieves relevant snippets rather than placing everything in the active context, which is more efficient for very large document collections but less suitable for tasks that require simultaneous awareness of the entire document (e.g., finding contradictions across two far-apart sections).

08 – Programmatic Access

What it is: Gemini is available via the Google AI Studio and Vertex AI APIs, allowing developers to build applications. NotebookLM has no public API as of 2026 — it is a human-facing research tool only.

You can integrate Gemini into your own software: chatbots, summarization pipelines, data extraction workflows. NotebookLM is strictly a web application (and limited mobile interface) designed for interactive use by researchers, students, and knowledge workers. There is no programmatic way to query a NotebookLM notebook or automate source ingestion. This makes Gemini the only choice for developers, while NotebookLM remains a personal productivity tool.

Implication: If you need to process thousands of documents automatically, build Gemini into your pipeline. If you are studying for an exam or reviewing a dozen contracts manually, NotebookLM's interface is superior.
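For a sense of what programmatic access looks like, here is a sketch of the request body for a single-turn Gemini `generateContent` call via the Google AI Studio REST API. The endpoint path and model name follow the API's documented shape at the time of writing but should be verified against current documentation; the sketch only builds the payload and makes no network call.

```python
import json

# REST endpoint shape used by Google AI Studio (append ?key=YOUR_API_KEY).
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)

def build_request(prompt: str) -> str:
    """Build the JSON body for a single-turn generateContent call."""
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return json.dumps(body)

payload = build_request("Summarize the attached quarterly report in 3 bullets.")
```

In a real pipeline you would POST this payload with any HTTP client, or use Google's official SDK instead of raw REST. There is no NotebookLM equivalent of either.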

SECTION 2: KNOWLEDGE & GROUNDING

09 – LLM-Generated Fabrications

What it is: Gemini (like all non-RAG LLMs) can confidently state false facts that "sound" correct. NotebookLM, when constrained to user sources, hallucinates at a much lower rate because it only has your documents to draw from.

Benchmarks show Gemini 1.5 Pro hallucinates in about 3–5% of long-form answers on factual topics (higher on obscure or recent events). NotebookLM with proper source coverage — i.e., the answer truly exists in your uploads — hallucinates below 1% because the RAG pipeline forces the model to ground every claim in a retrieved passage. However, if your uploaded documents themselves contain errors, NotebookLM faithfully repeats those errors — it does not fact-check.

Pro tip: For legal or medical work, NotebookLM is safer because every statement can be traced to an uploaded source. For general knowledge, Gemini is fine but verify critical facts.

10 – Recency of Built-in Knowledge

What it is: Gemini's training data cut-off is April 2025 (for Gemini 1.5) — it knows nothing about events after that date unless you enable Google Search grounding. NotebookLM has no cut-off because it has no built-in knowledge; you can upload documents from today, and it will answer based on them.

If you ask Gemini about a product announced in June 2025, it will either guess (hallucinate) or admit it doesn't know. With web search enabled, it can retrieve recent information, though search-grounded requests draw on a separate quota on paid plans. NotebookLM, by contrast, can answer questions about a document published five minutes ago — as long as you upload it. This makes NotebookLM ideal for analyzing recent internal reports, meeting transcripts, or freshly published research.


11 – Traceability of Answers

What it is: NotebookLM always provides inline citations linking each claim to the specific source document and passage. Gemini (without manual grounding) does not cite sources; with web search grounding, it may provide search result links but not precise passage-level citations.

When NotebookLM generates an answer, every sentence or factual claim is accompanied by a clickable citation (e.g., "Source: annual_report_2025.pdf, p. 12"). You can instantly verify the original text. Gemini free tier offers no citation tracking; Gemini Advanced with "double-check response" can highlight statements and show related web content, but it is not as granular or reliable as NotebookLM's citation system. For audit-trail requirements, NotebookLM is the only choice.

→ NotebookLM citations survive exports to Google Docs: you get footnotes.
→ Gemini citations are ephemeral and not exportable.

12 – Conflicting Information in Your Documents

What it is: NotebookLM will explicitly surface contradictions when two uploaded sources disagree. Gemini does not track document-level conflicts because it has no permanent source library.

If you upload two PDFs where one says "the project budget is $10M" and the other says "$12M," NotebookLM's answer will note the discrepancy and cite both sources. You can then ask follow-up questions to resolve it. Gemini, even with uploaded files (via its file upload feature), treats each upload as a separate conversation context without cross-source conflict detection. This makes NotebookLM superior for literature reviews, due diligence, or any scenario where source triangulation is required.

13 – Explicit Answer Certainty

What it is: NotebookLM provides no numerical confidence score but implicitly shows certainty through citation density. Gemini does not offer any confidence metric.

In NotebookLM, if an answer rests on a single weak passage, the citation is still present and the user can judge its relevance. Gemini, by contrast, states all answers with equal fluency regardless of whether the underlying information is solid or speculative. For high-stakes decisions, NotebookLM's requirement to cite sources lets you manually assess the strength of the evidence — a form of indirect confidence calibration.

14 – Integration with Public Databases

What it is: Gemini can be extended via Vertex AI Search to query private or public databases, including BigQuery, Salesforce, and Confluence. NotebookLM has no external database connectors.

Enterprise users can configure Gemini to retrieve information from live databases using natural language (e.g., "Show me sales for Q2 2026"). NotebookLM is limited to static documents you manually upload. It cannot query a SQL database, a CRM, or an API endpoint. This makes Gemini the only option for knowledge management systems that require real-time data from structured sources.

15 – Multilingual Capability

What it is: Gemini supports over 100 languages natively, including low-resource ones. NotebookLM works well with major languages (English, Spanish, Japanese, etc.) but its retrieval quality degrades for languages with less training data.

Gemini is fluent in most major and many minor languages; you can converse, translate, or analyze text in virtually any script. NotebookLM's underlying Gemini model is multilingual, but the retrieval pipeline (semantic search on your uploaded documents) uses embeddings that perform best on languages with large pre-training corpora. For Polish, German, French, or Chinese, both tools are fine. For Quechua or Tigrinya, only Gemini is reliable.
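The embedding-based retrieval mentioned above boils down to nearest-neighbor search under cosine similarity: the query and every passage are mapped to vectors, and the passages whose vectors point in the most similar direction win. The toy three-dimensional vectors below are invented for illustration; real embedding models produce vectors with hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" for a query and two passages.
query_vec = [0.9, 0.1, 0.0]
passages = {
    "revenue report": [0.8, 0.2, 0.1],
    "cafeteria menu": [0.1, 0.1, 0.9],
}
best = max(passages, key=lambda p: cosine(query_vec, passages[p]))
# best == "revenue report"
```

Retrieval quality for a low-resource language degrades at the embedding step: if the model maps a query and its matching passage to dissimilar vectors, the right passage never reaches the LLM at all.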

16 – Fine-tuning for Specific Fields

What it is: Gemini can be fine-tuned on domain-specific data via Vertex AI. NotebookLM does not offer model fine-tuning — you are limited to RAG over your uploaded sources.

If you run a radiology clinic, you could fine-tune Gemini on thousands of annotated X-ray reports to improve its medical terminology handling. NotebookLM cannot be fine-tuned; it relies solely on the base model's language understanding plus your uploaded documents. For most research use cases, RAG is sufficient. For deep domain adaptation where retrieval alone isn't enough, fine-tuned Gemini is more powerful.

SECTION 3: MEMORY, CONTEXT & CITATIONS

17 – Long-term Recall Across Sessions

What it is: Gemini (via Google account) can remember preferences and information across conversations if you enable "Memory" (opt-in feature). NotebookLM has no cross-session memory — each notebook is isolated, and the AI does not remember prior conversations.

With Gemini memory on, you can say "I'm vegan" and months later ask "recommend a protein-rich dinner" and Gemini will recall your dietary preference. NotebookLM treats every chat inside a notebook as independent; there is no persistent memory of your previous statements. For research projects, this is fine because you want a clean slate. For personal assistance, Gemini's memory is a major advantage.

18 – How Much History Can It Remember in One Chat?

What it is: Gemini supports up to 1M tokens of conversation context (including previous turns). NotebookLM also has a large context window, but each answer depends on retrieved sources, not just raw chat history.

You can have an hour-long conversation with Gemini, and it will remember details from the beginning. NotebookLM, within a notebook, also retains the chat history (subject to token limits). However, because NotebookLM re-retrieves sources for each question, it may not be aware of deductions you made earlier unless you explicitly restate them. For complex discursive research, Gemini's pure autoregressive memory is more coherent; for Q&A over fixed documents, NotebookLM's retrieval-first approach is fine.

19 – Organizing Knowledge into Projects

What it is: NotebookLM is built around "notebooks" — each containing up to 100 sources (documents, links, Google Docs). You can create multiple notebooks for different projects. Gemini has no native project or folder organization; your chats are linear threads in the history.

With NotebookLM, you can maintain a notebook for "Q3 Financial Reports," another for "Literature Review on CRISPR," and switch between them without mixing sources. Each notebook has its own source library and independent chat history. Gemini's chat interface is a single stream; you can upload files per conversation, but you cannot easily segment knowledge domains. For power researchers juggling multiple unrelated topics, NotebookLM's notebook model is superior.

→ Notebooks can be shared with collaborators (unlike Gemini chats).
→ You can also export an entire notebook (sources + Q&A) as a Google Doc.

20 – Handling Version Changes

What it is: NotebookLM allows you to replace or delete sources in a notebook; the AI will adapt its future answers. Gemini, once a file is uploaded in a conversation, cannot be changed or removed without starting a new chat.

If you're analyzing a draft contract and the client sends an updated version, you can delete the old PDF in NotebookLM and upload the new one. All future Q&A will use the new version. Gemini treats each uploaded file as static; to switch to a new version, you must start a fresh conversation and re-upload. NotebookLM's mutable source library is a major workflow advantage for iterative projects.

21 – Multi-User Access

What it is: NotebookLM notebooks can be shared with other Google accounts, allowing team members to view the same sources and chat history. Gemini chats are private to your account (no native collaboration).

A research team can share a single NotebookLM notebook: everyone uploads sources, asks questions, and sees the same citations. This enables synchronous literature review or due diligence. Gemini has no sharing feature — you can copy/paste chat content but not collaborate in real time. For enterprise knowledge management, NotebookLM with Google Workspace sharing is clearly superior.

22 – Extracting Work Products

What it is: NotebookLM can export a notebook's entire Q&A history as a Google Doc, including inline citations. Gemini allows copying individual responses but no structured export of a conversation.

When you finish analyzing 20 documents in NotebookLM, you can click "Export to Docs" and get a comprehensive report with every answer and source reference preserved. This is invaluable for creating meeting minutes, research summaries, or audit trails. Exporting from Gemini is manual and error-prone — you must copy each answer separately, losing citation context. For professional documentation, NotebookLM wins.

23 – Generated Conversations for Listening (Podcast Mode)

What it is: NotebookLM's unique "Audio Overview" feature turns your sources into a simulated two-host podcast discussion, generated by AI voices. Gemini has no equivalent.

NotebookLM can create a 10-20 minute audio file where two AI hosts discuss your uploaded documents, highlight key points, and even disagree playfully. This is a powerful way to digest research while driving or exercising. Gemini's text-to-speech can read responses aloud, but it does not generate a synthesized discussion. This feature alone makes NotebookLM appealing for auditory learners and busy professionals.

Example: Upload a textbook chapter → get a podcast that explains the concepts conversationally.

24 – Sentence-level vs. Document-level References

What it is: NotebookLM provides sentence- or clause-level citations (every factual claim is linked to a specific passage). Gemini, even with web grounding, gives document-level URLs — not precise locations.

When NotebookLM says "Revenue grew 15% in Q2," that sentence is a clickable link that takes you to the exact line in your uploaded spreadsheet or PDF. Gemini, when you ask a question with web search, might show "Source: example.com" at the bottom, but you have to scroll and find the relevant sentence yourself. For rigorous fact-checking, NotebookLM's granularity saves hours.

SECTION 4: FEATURES & INTEGRATION

25 – Connecting to Third-Party Services

What it is: Gemini supports extensions for Google Workspace (Gmail, Drive, Docs, Calendar), YouTube, Maps, Flights, Hotels, and soon third-party apps. NotebookLM has no extensions — it only reads your uploaded files.

With Gemini, you can say "Summarize my unread emails," "Find flights to Tokyo next week," or "Show me the agenda from today's Drive document" — and it will directly interact with those services. NotebookLM cannot access your calendar, email, or any external app. It is a pure document analysis tool. For personal productivity automation, Gemini is vastly more powerful.

26 – Creating Visuals from Text

What it is: Gemini (via Imagen 3 integration) can generate images, edit photos, and create diagrams. NotebookLM has no image generation capability.

Ask Gemini "Generate a diagram of the water cycle" and it produces a labelled image. NotebookLM can only describe diagrams in text based on uploaded documents. For content creators, marketers, or educators, Gemini's image generation is a major differentiator. NotebookLM remains text-only (except for the podcast audio feature).

27 – Voice Interaction

What it is: Both tools support voice input (speech-to-text) and text-to-speech on mobile and web. However, Gemini's voice mode is more conversational (live streaming), while NotebookLM's voice is simple dictation.

Gemini's Live voice mode (available on mobile) allows natural back-and-forth conversation with interruptions, like talking to a human. NotebookLM's voice input transcribes your speech into text, then responds with written text (which can be read aloud via standard TTS). For hands-free, research-oriented Q&A (e.g., while cooking), NotebookLM is fine; for fluid conversation, Gemini is superior.

28 – Analyzing Video Content

What it is: Gemini can process YouTube video transcripts (and with multimodal, visual frames) to answer questions about the video. NotebookLM can also ingest a YouTube video by pasting the URL — it will read the transcript but not the video frames.

In Gemini, you can ask "What color shirt was the presenter wearing at 2:30?" (visual frame analysis) or "Summarize the key points from this 30-minute lecture." NotebookLM extracts only the auto-generated subtitles (if available), so it cannot answer visual questions. Both can summarize spoken content, but Gemini's multimodal advantage is clear.

29 – Working with Live Documents

What it is: NotebookLM can directly import Google Docs as sources and keep them in sync (if the Doc is updated, NotebookLM can re-ingest). Gemini can read a Doc via extension but does not maintain a persistent source relationship.

In NotebookLM, you add a Google Doc link as a source; later, when you edit the Doc, NotebookLM detects the change and can refresh. Gemini's Workspace extension can read a Doc on request but does not "subscribe" to it. For collaborative writing or evolving documents, NotebookLM's live sync is advantageous.

30 – Platform Availability

What it is: Both have web and mobile apps (iOS/Android). NotebookLM's mobile app is primarily for voice input and reading results, with limited source management. Gemini's mobile app is fully featured, including camera input for visual Q&A.

On Gemini mobile, you can take a photo of a plant and ask "What species is this?" or scan a menu and ask "What's the lowest-calorie option?" NotebookLM mobile lacks camera integration — it can only upload existing files. For field research or everyday assistance, Gemini's mobile camera AI is unmatched.

31 – How Much Can You Use Each Tool for Free?

What it is: Gemini free tier: limited to roughly 60 requests per hour and 1,600 per day (varies by region). NotebookLM free tier: significantly higher limits — you can upload up to 100 sources per notebook and generate many queries without strict caps.

NotebookLM is currently more generous on the free plan because it's positioned as a research tool (Google wants to attract academics). Gemini's free tier is more restrictive, encouraging upgrades to Gemini Advanced. If you plan to process hundreds of queries per day on a tight budget, NotebookLM is the better choice, provided your use case fits its source-grounded paradigm.

Pro tip: Gemini Advanced (paid) removes most rate limits and adds 2M token context. NotebookLM has no paid tier as of 2026 — it's completely free with a Google account.

32 – Choosing Between Model Versions

What it is: Gemini allows you to switch between 1.5 Flash (fast, lower quality) and 1.5 Pro (slower, higher quality) in the web interface. NotebookLM uses a fixed model — no user-visible model selection.

Advanced users often want faster responses for simple tasks and deeper reasoning for complex ones. Gemini gives that control. NotebookLM abstracts the model away to keep the interface simple. For speed-critical workflows, Gemini with Flash is better; for simplicity, NotebookLM's one-model approach is fine.

SECTION 5: PRICING, ACCESS & USE CASES

33 – What You Get Without Paying

What it is: Gemini free: access to Gemini 1.5 Flash, limited 1.5 Pro requests, no code execution, limited web search. NotebookLM free: full access to all features (aside from rate limits on Audio Overview generation), up to 100 sources per notebook, unlimited notebooks.

NotebookLM's free tier is remarkably generous — you can build a massive research library without spending a dime. Gemini free is usable but deliberately capped to encourage upgrades. If you're a student or independent researcher, NotebookLM offers more value at zero cost. If you need real-time web search or image generation, Gemini free is still useful but limited.

34 – What You Get for Money

What it is: Gemini Advanced (via Google One AI Premium) costs $19.99/month (or regional equivalent) and unlocks Gemini 1.5 Pro with 2M context, priority access, code execution, and higher rate limits. NotebookLM has no paid plan — it remains free.

Gemini Advanced is aimed at professionals who need the best model for complex reasoning, large document analysis, or heavy usage. NotebookLM's lack of a paid tier means Google may introduce one in the future, but for now it's an exceptional free resource. For most research tasks, NotebookLM free is sufficient; for heavy API-style usage, Gemini Advanced or pay-as-you-go via Vertex AI is required.

35 – API Costs

What it is: Gemini is available via Google AI Studio and Vertex AI at pay-per-token rates (e.g., roughly $0.0025 per 1K input tokens for Flash). NotebookLM has no API.

If you're building an application that needs LLM capabilities, Gemini API is the only choice. NotebookLM is purely an end-user tool. API pricing for Gemini is highly competitive (often cheaper than OpenAI's GPT-4). For bulk document processing at scale, you can run Gemini API against your own RAG pipeline.
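A quick back-of-the-envelope cost sketch for bulk processing: the input rate below matches the rough figure quoted above, while the output rate is purely illustrative, since actual prices vary by model and change over time.

```python
def estimate_cost(input_tokens, output_tokens,
                  in_rate_per_1k=0.0025, out_rate_per_1k=0.0075):
    """Estimate a pay-per-token bill. Rates are illustrative defaults,
    not current published pricing."""
    return (input_tokens / 1000) * in_rate_per_1k + \
           (output_tokens / 1000) * out_rate_per_1k

# Summarizing 1,000 documents of ~4,000 input tokens each,
# getting ~500-token summaries back:
total = estimate_cost(1_000 * 4_000, 1_000 * 500)
# total == 13.75 (dollars, under these assumed rates)
```

Even rough arithmetic like this is worth doing before committing to a pipeline: input volume usually dominates, so trimming prompts and retrieved context is the biggest cost lever.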

36 – Best for Academic Research

What it is: NotebookLM is far superior for literature reviews, annotated bibliographies, and extracting insights from dozens of PDF papers. Gemini is better for brainstorming new hypotheses or explaining concepts not in your source set.

A PhD student analyzing 50 papers on a niche topic should use NotebookLM: upload all PDFs, ask comparative questions, get citations, generate podcast summaries. For writing a new theoretical framework, Gemini can help by drawing on its broad world knowledge. In practice, many researchers use both: NotebookLM for source grounding, Gemini for creative expansion.

37 – Best for Legal & Compliance

What it is: NotebookLM is the clear winner because of its citation granularity and refusal to hallucinate beyond sources. Gemini's lack of source-level citations makes it risky for professional legal work.

Lawyers can upload a contract into NotebookLM and ask "What are the termination clauses?" — each answer links to the exact sentence in the PDF. They can then export the Q&A as a deliverable. Gemini's answers, even with file upload, cannot be traced to specific lines, creating liability. Use NotebookLM for any regulated industry where auditability is required.

38 – Best for Developers

What it is: Gemini is far better for code generation, debugging, and API access. NotebookLM has no code execution and limited understanding of code syntax (treats it as text).

You can ask Gemini to write a Python script, explain a complex algorithm, or run your code. NotebookLM will read your code files and explain them in plain English but cannot execute or debug. For software engineering, Gemini is the obvious choice.

39 – Best for Content Creators

What it is: Gemini excels at creative writing, ideation, and summarization of live web content. NotebookLM is overkill if you don't have a fixed source corpus.

A blogger researching a topic could use NotebookLM to digest 10 competitor articles (uploaded as PDFs) and then use Gemini to draft original content. For pure creative writing without source constraints, Gemini is faster and more flexible. NotebookLM shines when you need to extract and cite from specific materials.

40 – Best for Business Intelligence

What it is: NotebookLM is excellent for analyzing company internal documents (quarterly reports, board decks, customer feedback). Gemini with Workspace extensions can also query live data but lacks persistent source libraries.

A product manager can upload customer support transcripts, feature requests, and competitive analysis PDFs into one notebook, then ask "What are the top three customer pain points?" — receiving answers with citations. Gemini would require re-uploading files each session. For recurring analysis of static document sets, NotebookLM is more efficient.

CONCLUSION: Don't Choose — Use Both Strategically

Gemini AI and NotebookLM are not competitors; they are complementary tools for different stages of knowledge work. Use NotebookLM when you need to interrogate a fixed, trusted set of documents with full citation traceability. Use Gemini when you need general world knowledge, multimodal understanding, real-time web search, or creative generation.

In a typical research workflow, you might start with Gemini to explore a topic and build a reading list. Then upload those PDFs to NotebookLM to extract precise insights. Finally, use Gemini again to synthesize those insights into a report or presentation. Having both in your toolkit unlocks a new level of AI-augmented productivity.

Which one will you use today?

If you're studying for an exam, analyzing a contract, or writing a literature review — open NotebookLM. If you need to code, generate images, or have a casual conversation — open Gemini. And if you're unsure, try both on the same question and observe the difference. The best teacher is experience.