6 Essential Best Practices for Using DinoAI Effectively
Through analysis of multiple enterprise implementations, we've identified key areas where clearer guidance can dramatically improve user success. This blog post distills those learnings into 6 essential best practices that will help you maximize your effectiveness with DinoAI.

Kaustav Mitra · Jun 27, 2025 · 6 min read
The Bottom Line Up Front
DinoAI isn't magic, but it becomes incredibly powerful when you provide clear context, configure it properly, and use its advanced features strategically. Success comes from understanding both its capabilities and how to communicate effectively with AI to get the results you need.
Important Note: Like all generative AI systems, DinoAI's responses are not deterministic. While highly accurate given proper input, AI-generated outputs typically require some refinement and validation. Don't expect perfect results from a single prompt—plan for an iterative process where you review, adjust, and improve the generated code to meet your exact requirements.
1. Provide Explicit Context - Don't Assume DinoAI "Knows" Your Business
The Challenge: Many teams expect DinoAI to automatically understand their internal business logic and generate tests for company-specific scenarios without providing adequate context.
The Solution: Always provide comprehensive background information about your business domain, data structure, and specific requirements.
Best Practice:
Include relevant business rules in your prompts
Explain domain-specific terminology
Provide sample data structures when requesting model generation
Reference existing documentation or standards
Example:
❌ Poor: "Generate tests for our revenue data"
✅ Better: "Generate tests for our e-commerce revenue data that should check for: daily revenue variance beyond 20%, non-zero revenue for completed order dates, and consistency between order_total and payment_received fields. Our business defines completed orders as status='shipped' AND payment_status='confirmed'."
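To see why the specific prompt works better, it helps to picture what the generated output should look like. The sketch below shows how the first check (daily variance beyond 20%) might come out as a dbt-style singular test; the model name `fct_orders` and its columns are illustrative assumptions, not part of the original example.

```sql
-- Hypothetical dbt singular test: return rows (i.e., fail) when daily
-- revenue deviates more than 20% from the prior 7-day rolling average.
with daily as (
    select
        order_date,
        sum(order_total) as daily_revenue
    from {{ ref('fct_orders') }}  -- assumed model name
    where status = 'shipped'
      and payment_status = 'confirmed'  -- "completed" per the business rule
    group by order_date
),

with_baseline as (
    select
        order_date,
        daily_revenue,
        avg(daily_revenue) over (
            order by order_date
            rows between 7 preceding and 1 preceding
        ) as rolling_avg  -- assumes one row per day
    from daily
)

select *
from with_baseline
where rolling_avg > 0
  and abs(daily_revenue - rolling_avg) / rolling_avg > 0.20
```

A vague prompt leaves all of these thresholds and definitions for the AI to guess; the specific prompt hands them over directly.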
2. Master the Art of Incremental Prompting
The Challenge: Users often try to accomplish complex tasks in a single prompt, leading to suboptimal results.
The Solution: Break complex requests into smaller, sequential steps and build upon previous outputs. This iterative approach is especially important because AI responses are probabilistic—each interaction helps refine and improve the results.
Why This Matters: Generative AI doesn't produce deterministic outputs. Even with identical prompts, you may get slightly different results each time. By working incrementally, you can guide the AI toward better outcomes and catch issues early before they compound in more complex generations.
Best Practice:
Start with a basic model or test structure
Refine and add complexity iteratively
Use previous outputs as context for follow-up requests
Validate each step before proceeding
Expect to make adjustments at each iteration—this is normal and expected with AI-generated code
Example Workflow:
"Create a basic customer dimension model"
"Add SCD Type 2 handling to the customer model you just created"
"Now add data quality tests for the customer model"
"Generate documentation for this customer model"
3. Configure .dinorules for Consistent Standards
The Challenge: Teams often test without proper DinoAI rules in place, leading to inconsistent outputs that don't match their standards.
The Solution: Use the .dinorules configuration file to define custom instructions and development standards for DinoAI, ensuring all AI-generated code follows your team's patterns.
Best Practice:
Create a .dinorules file in your repository root to establish project-specific rules
Define coding standards, naming conventions, and architectural patterns
Set up rules for column naming, data types, and model structures
Include business-specific validation rules
The rules remain available across sessions and apply to all DinoAI interactions
Example .dinorules Configuration:
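A minimal sketch of the kinds of rules a team might define. The contents below are illustrative assumptions, not DinoAI's required syntax:

```text
# .dinorules (illustrative example)
- Name models in snake_case with layer prefixes: stg_, int_, fct_, dim_
- Timestamp columns end in _at and are stored in UTC
- Every model must have a primary-key uniqueness test and not_null tests
  on all key columns
- Prefer CTEs over subqueries; one logical transformation per CTE
- Monetary amounts use numeric(18,2) and are reported in USD
- Completed orders are defined as status = 'shipped'
  AND payment_status = 'confirmed'
```

Whatever the exact format, the goal is the same: encode the standards once so every DinoAI interaction inherits them automatically.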
Key Benefits:
Define project-specific rules that customize DinoAI's behavior to match your team's unique needs
Set technical standards and ensure DinoAI provides responses that align with your preferred methods
Enhance team consistency and establish consistent development practices across your entire analytics engineering team
4. Leverage .dinoprompts for Reusable Prompt Libraries
The Challenge: Teams recreate the same complex prompts repeatedly, wasting time and losing institutional knowledge about what works.
The Solution: Use the .dinoprompts file as your team's prompt library to store and reuse battle-tested prompts tailored to your data development patterns.
Best Practice:
Store tailored prompts for analytics engineering workflows in a centralized library
Use variables for dynamic, situation-specific prompts
Share effective prompts across team members
Access proven prompts instantly instead of recreating them
Example .dinoprompts Structure:
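A hypothetical sketch of what a shared prompt library might look like, using the dynamic variables documented below. The YAML layout and prompt names are assumptions for illustration, not DinoAI's official syntax:

```yaml
# .dinoprompts (illustrative structure)
prompts:
  - name: review-branch-changes
    prompt: |
      Review the following diff for naming-convention and test-coverage
      issues before I open a pull request:
      {{ git.diff.withOriginDefaultBranch }}

  - name: document-current-model
    prompt: |
      Generate column-level documentation for the model at
      {{ editor.currentFile.path }}, following our .dinorules standards.
```

Stored this way, a proven prompt becomes a one-click action for every team member instead of something rewritten from memory.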
Available Variables for Dynamic Prompts:
{{ git.diff.withOriginDefaultBranch }} - Includes the git diff between your current branch and the default branch
{{ editor.currentFile.path }} - Includes the file path of the currently opened and selected file
{{ editor.openFiles.path }} - Includes the file paths of all opened files
Key Benefits:
Time savings through instant access to proven prompts
Team knowledge sharing and distribution of effective prompts across your organization
Fast onboarding where new team members access best practices immediately
5. Use Context Management for Fine-Tuned Results
The Challenge: Users don't provide enough specific context about their files and project structure, leading to generic responses that don't fit their needs.
The Solution: Strategically use DinoAI's context features to help it understand your project, data warehouse, and preferences for more relevant and accurate results.
Best Practice:
Add individual files for targeted tasks
Add directories for broader patterns
Provide context about related models and dependencies
Include relevant schema information when working with data transformations
Context Strategy:
File Context: When working on a specific model, include related upstream and downstream models
Directory Context: When establishing patterns, include the entire folder to understand conventions
Schema Context: When creating new models, include relevant source table definitions
Business Context: Always explain the business purpose and expected data patterns
Example Context Usage:
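A sketch of what a context-rich request might look like in practice. The file names and business details are hypothetical, chosen only to illustrate the strategy above:

```text
Context attached:
  - models/marts/fct_orders.sql          (downstream pattern to match)
  - models/staging/stg_payments.sql      (source of refund events)
  - models/marts/                        (directory, for naming conventions)

Prompt: "Create fct_refunds following the same grain and naming patterns
as fct_orders. Refunds come from stg_payments where payment_type = 'refund';
each output row should be one refund event joined back to its original
order, with refund_amount never exceeding the order_total."
```

Notice how the attachments carry the technical conventions while the prompt carries the business logic; DinoAI needs both.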
Advanced Context Tips:
Combine file and directory context for comprehensive understanding
Use .dinorules for consistent standards as permanent context for all DinoAI interactions
Provide business logic explanations alongside technical requirements
Include examples of expected output when requesting complex transformations
6. Be Specific About Testing Requirements
The Challenge: Team members expect DinoAI to automatically generate complex business-specific tests without explicitly defining what constitutes "good" data for their use case.
The Solution: Clearly define your data quality expectations and provide specific test scenarios.
Best Practice:
Specify the types of tests you need (uniqueness, completeness, validity, consistency)
Provide examples of what should pass/fail
Include business rules that data must satisfy
Define acceptable ranges and patterns
Reference your .dinorules for consistent test patterns
Example:
❌ Vague: "Add tests to this model"
✅ Specific: "Add the following tests to this revenue model: 1) Check that daily revenue variance doesn't exceed 20% from the 7-day rolling average, 2) Verify that completed_date is never in the future, 3) Ensure revenue_amount is positive for completed orders, 4) Validate that the sum of line_item_totals equals order_total. Follow our .dinorules for test naming and structure."
Comprehensive Testing Strategy:
Data Quality Tests: Not null, unique, accepted values
Business Logic Tests: Custom validations for domain-specific rules
Relationship Tests: Foreign key constraints and referential integrity
Freshness Tests: Data recency for time-sensitive models
Volume Tests: Expected row counts and data distribution checks
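Assuming a dbt-style project (the post's models and tests follow that pattern), the first three categories above might map onto a schema file like this. Model and column names are illustrative:

```yaml
# Hypothetical schema.yml covering data quality, business logic,
# and relationship tests for a revenue model
models:
  - name: fct_revenue
    columns:
      - name: order_id
        tests:
          - unique      # data quality: uniqueness
          - not_null    # data quality: completeness
      - name: order_status
        tests:
          - accepted_values:   # data quality: validity
              values: ['pending', 'shipped', 'cancelled']
      - name: customer_id
        tests:
          - not_null
          - relationships:     # relationship test: referential integrity
              to: ref('dim_customers')
              field: customer_id
```

Spelling the categories out like this in your prompt gives DinoAI a concrete checklist rather than a vague request to "add tests."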
Putting It All Together: The DinoAI Power User Workflow
The most effective DinoAI users combine all these practices into a systematic approach:
Set up your foundation with .dinorules for standards and .dinoprompts for common tasks
Provide rich context through file and directory attachments
Use incremental prompting to build complexity gradually
Be explicit about requirements especially for testing and business logic
Iterate and refine based on results, updating your rules and prompts
Example Power User Session:
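A hypothetical session showing the practices working together; the model and column names are invented for illustration:

```text
1. "Using our .dinorules, scaffold stg_subscriptions from the raw
   billing.subscriptions source."
   → review the generated SQL against team conventions

2. "Make stg_subscriptions incremental, keyed on updated_at."
   → run the model, spot-check the incremental logic

3. "Add not_null and unique tests on subscription_id, plus an
   accepted_values test on plan_tier."
   → execute the tests and confirm they pass

4. "Generate column-level documentation for this model."
   → edit wording for business accuracy before committing
```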
Key Insight: Each step includes a review phase because AI-generated code, while highly capable, benefits from human oversight and refinement. This collaborative approach between human expertise and AI capability produces the best results.
Conclusion: From Good to Great
These 6 best practices transform DinoAI from a helpful tool into a powerful force multiplier for your analytics engineering team. By combining explicit context, strategic configuration, and systematic workflows, you'll achieve:
Faster development cycles with consistent, high-quality code generation
Reduced onboarding time as new team members leverage shared prompts and standards
Improved code quality through standardized testing and documentation practices
Enhanced team collaboration with shared knowledge and consistent patterns
Remember the AI Reality: DinoAI's power comes not from producing perfect code on the first try, but from dramatically accelerating your development process through intelligent assistance that you then refine and perfect. The most successful teams embrace the iterative nature of AI collaboration—using AI to generate solid foundations that they then review, test, and enhance to meet their exact specifications.
Think of DinoAI as an exceptionally skilled junior developer who produces high-quality work quickly but still benefits from senior oversight and guidance. This collaborative approach leverages the best of both human expertise and AI capability.
Ready to implement these practices? Start with setting up .dinorules (practice 3) to establish your foundation, then gradually incorporate the other practices. Remember that AI-generated outputs are starting points that benefit from review and refinement—this iterative collaboration between you and DinoAI will unlock its full potential for your team.