
Using AI in My Design Workflow: Experiments (so far!) and Lessons Learned

Jul 19, 2025

Over the past few months, I've been experimenting with different ways to integrate AI into my design process. Rather than replacing core design thinking, I've found AI most valuable as an intelligent assistant that accelerates specific parts of my workflow. This is particularly true for the time-consuming research and setup phases that often slow down creative momentum.

Working in cybersecurity design presents unique challenges: rapidly evolving technical landscapes, complex domain knowledge, and the need to design for highly technical users while maintaining security standards. Here's how I've been leveraging AI to navigate these challenges more efficiently.

AI as My Personal Domain Expert

The Challenge: Cloud security is a field where new threats and terminology emerge constantly. As a designer, I need to understand complex technical concepts quickly to make informed design decisions, but I don't always have the deep technical background that my target personas possess.

How I'm Using AI: I've essentially turned AI into my on-demand cloud security tutor. Instead of spending hours googling and piecing together information from multiple sources, I can get comprehensive explanations tailored to my role as a designer.

Recent Example: When working on a cloud security feature that needed to surface network configuration details, I used this prompt:

"I'm a UX designer working on a feature for a cloud security product. As part of the feature, I want to surface network-related details like IP addresses, subnets, etc. Help me understand what network configuration details a cloud asset possesses and the purpose of each detail from a security monitoring perspective."

The Result: I got a structured breakdown of network components (VPCs, subnets, security groups, route tables) with explanations of why each matters for security monitoring. This helped me design an interface that prioritized the most critical information and organized it in a way that matched how security analysts actually think about network configurations.

Before vs. After:

  • Before: 30-45 minutes of googling, reading documentation, then scheduling time with engineering to clarify nuances

  • After: 5-10 minutes getting comprehensive context, then focused discussions with engineering about product-specific implementations and constraints

Key Learning: The conversations with PMs and engineers became much more productive. Instead of basic education, we could focus on product-specific decisions, technical limitations, and user workflow optimizations.

AI for Rapid Downstream Integration Mockups

The Challenge: Security workflows often involve multiple third-party tools, integrations, and downstream steps. To design end-to-end experiences, and to collaborate effectively with PM counterparts, I need to visualize how our product fits within users' existing tool ecosystems, but creating detailed mockups of third-party tools can be time-consuming. Stitching together half-baked screenshots eats up time and rarely carries the context you need.

Recent Experiment: During a workflow discussion with our PM, we realized we needed to show how a downstream Pull Request action would appear in a popular version control tool that many customers use. Rather than spending time researching and recreating that interface from scratch or finding screenshots, I tried a different approach.

The Process:

  1. Context Setting: Provided Figma Make with background about the integration (example prompt below)

  2. Reference Input: Shared a screenshot from the web of the target tool's interface

  3. Specific Request: Asked for a mockup showing the exact code that would appear in the PR

  4. Rapid Iteration: Made quick adjustments to match what I needed
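
To make steps 1-3 concrete, the prompt paired with the reference screenshot would look roughly like this (an illustrative sketch, not the exact wording I used):

"I'm a UX designer working on a cloud security product. We're designing an integration whose downstream action opens a Pull Request in the customer's version control tool. Using the attached screenshot of that tool's PR view as a visual reference, create a mockup of how this PR would appear, including the exact code change the integration would generate."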

The Result: In about 10 minutes, I had a realistic-looking mockup that helped us visualize the integration and the end-to-end workflow.

Why This Worked:

  • Low stakes exploration: Perfect for early workflow discussions

  • Visual communication: Much clearer than abstract or textual descriptions

  • Rapid iteration: Could test multiple presentation formats quickly

  • Stakeholder alignment: Gave our PM a concrete artifact for walking other PMs through the end-to-end experience

When I'd Use This Approach:

  • Early workflow exploration where visual context matters

  • Integrations with well-established UI patterns

  • Stakeholder presentations where realistic context helps

  • Rapid prototyping of downstream experiences

When I Won't Use This:

  • Final design deliverables (always create these myself)

  • Novel interface challenges that require original thinking

  • When accuracy of third-party interfaces is critical to user understanding

AI for Contextual Copy Generation

The Challenge: Cybersecurity interfaces are often copy-heavy. Error messages, status descriptions, summaries, and help text all need to be technically accurate while remaining accessible to users under stress. Writing realistic copy that reflects actual security scenarios, or hunting down relevant descriptions in technical documentation, can be time-consuming, especially when I need multiple variations.

How I'm Using AI: I leverage AI to generate contextual, realistic copy that helps stakeholders understand how the interface will feel in real-world scenarios. This is particularly valuable for technical descriptions and error/warning messages, where the accuracy, tone and specificity of messaging can significantly impact user response.

The Value This Adds:

  • Realistic prototypes: Stakeholders can better evaluate designs when they see real-world content (no longer "lorem ipsum")

  • Copy consistency: AI helps maintain similar tone and structure across different interface elements

  • Multiple options: Quick generation of alternatives helps find the right tone and level of detail

  • Technical accuracy: AI understands cybersecurity terminology and can maintain appropriate technical language

My Copy Generation Process:

  1. Define parameters: Specify tone, length, technical level, and user context (see the example prompt after this list)

  2. Generate multiple options: Usually ask for 3-5 variations

  3. Iterate with context: Refine based on specific user scenarios

  4. Review for accuracy: Always verify technical details with subject matter experts

  5. Review with content writers: Loop in content writers before the copy is finalized
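
For example, a prompt that pins down those parameters might look something like this (illustrative, not taken from a real project):

"I'm a UX designer for a cloud security product. A scan has flagged a misconfigured storage bucket, and I need the warning message a security analyst would see on the finding's detail page. Give me 3-5 variations that are concise, technically accurate, calm in tone, and end with a clear next step the analyst can take."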

Important Limitations:

  • Brand voice consistency: AI-generated copy needs editing to match company voice and style

  • Regulatory compliance: Security products often have specific language requirements that need human review

  • Context gaps: AI doesn't know your specific product's architecture or user workflows

Best Practices I've Learned:

  • Start with AI, finish with humans: Use AI for initial copy generation, then refine based on user feedback

  • Provide rich context: The more specific the scenario, the better the copy quality

  • Create copy libraries: Save good AI-generated copy variations for reuse across projects

  • Validate: Always have domain experts review technical accuracy

Some Principles I'm Thinking About

Through these experiments, I've started forming some guiding principles for AI integration in design workflows:

  1. AI is for acceleration, not replacement (at least not yet!): AI is often great at speeding up research, content generation, and rapid iteration. It doesn't replace design thinking, user empathy, or creative problem-solving.

  2. Context is Everything: The more specific context I provide about my role, the project, and the intended outcome, the more useful AI becomes. Generic prompts yield generic results. This is a real limitation when you're working on confidential material, so learn from AI as much as you can and keep building your own skills along the way.

  3. Validate: AI-generated insights, whether domain knowledge or copy, need to be validated through traditional methods and processes before they make it into final outcomes.

  4. Document Your Experiments: I'm trying to keep track of which AI tools work well for specific types of tasks and which prompts gave better results. This, I'm hoping, will help me build a more effective toolkit over time.

  5. Maintain Critical Thinking: AI responses can be confident-sounding but wrong. I've learned to cross-reference important technical information.

The key insight from these experiments:

AI works best when it amplifies human capabilities rather than trying to replace them.

I truly believe that, as designers, our value lies in empathy, creative problem-solving, and strategic thinking. AI can help us spend more time on these uniquely human contributions by handling some of the more mechanical aspects of our workflow.

The insights and approaches shared in this article are based on my personal experiments and observations. AI capabilities and best practices continue to evolve rapidly, so what works today may change tomorrow.

Thank you for visiting my portfolio 🎨

Get in touch!


divya.agarwal@utexas.edu
