.NET gRPC Service & REST Gateway Training Guide

Table of Contents

  1. Prerequisites
  2. Environment Setup
  3. JFrog Artifactory Configuration
  4. Generating .NET gRPC Services
  5. Local Development Workflow
  6. Testing Services Locally
  7. Generating REST Gateways
  8. Platform Deployment
  9. Testing Deployed Services
  10. Troubleshooting
  11. Quick Reference

Prerequisites

Required Software

  • .NET 9 SDK - Download from Microsoft
  • .NET 8 SDK - Some archetypes may require .NET 8 compatibility
  • Archetect - Template engine for service generation
  • Docker Desktop (if using CockroachDB persistence)
  • Git - Version control
  • Visual Studio Code or Visual Studio - IDE
  • Postman - API testing (gRPC support required)

Platform Access

  • JFrog Artifactory access with token generation permissions
  • GitHub repository access
  • ArgoCD access for deployment monitoring
  • Kubernetes cluster access (platform environment)

Knowledge Prerequisites

  • Basic .NET development experience
  • Familiarity with microservices concepts
  • Basic understanding of gRPC vs REST APIs
  • Command line/terminal usage

Environment Setup

1. Install .NET SDKs

macOS (using Homebrew):

brew install dotnet
# This typically installs the latest version (9.x)
# For .NET 8 compatibility, you may need both versions

Windows: Download and install from Microsoft .NET Downloads

Verify Installation:

dotnet --version
# Should show version 9.x.x

# Check all installed versions
dotnet --list-sdks
# Should show both 8.x.x and 9.x.x if you need .NET 8 compatibility

Note: While the documentation may reference .NET 8, most archetypes work with .NET 9. Some, however, specifically require .NET 8 for compatibility. If you encounter build issues, verify that the installed .NET SDK version matches the archetype's requirements.

2. Install Archetect

Follow installation instructions from the Archetect documentation.

Verify Installation:

archetect --version

3. Set Up Shell Environment

Add the following to your shell profile (.zshrc, .bashrc, etc.):

# JFrog Configuration
export ARTIFACTORY_USERNAME="your-username"
export ARTIFACTORY_TOKEN="your-token"

JFrog Artifactory Configuration

1. Generate JFrog Token

Step-by-Step Process:

  1. Access Your JFrog Instance

    • Navigate to your organization's JFrog Artifactory URL (e.g., https://your-company.jfrog.io)
    • Log in with your corporate credentials
  2. Navigate to User Profile

    • Click your avatar/profile picture in the top-right corner
    • Select "Edit Profile" or "User Profile"
  3. Generate Access Token

    • Look for "Generate Token" or "Access Tokens" section
    • Click "Generate" to create a new token
    • Important: Copy both the username AND the generated token
    • Save these immediately - you won't be able to see the token again
  4. Token Permissions

    • Ensure the token has read access to your NuGet repositories
    • For most development work, the default permissions are sufficient

📖 Detailed Instructions: Internal JFrog Setup Guide

2. Configure Environment Variables

macOS (zsh shell - default on newer Macs):

# Edit your shell profile
nano ~/.zshrc

# Add these lines at the end
export ARTIFACTORY_USERNAME="your-actual-username"
export ARTIFACTORY_TOKEN="your-generated-token"

# Save and reload
source ~/.zshrc

# Verify variables are set
echo $ARTIFACTORY_USERNAME
echo $ARTIFACTORY_TOKEN

macOS (bash shell):

# Edit your shell profile  
nano ~/.bash_profile

# Add these lines at the end
export ARTIFACTORY_USERNAME="your-actual-username"
export ARTIFACTORY_TOKEN="your-generated-token"

# Save and reload
source ~/.bash_profile

Windows (PowerShell - Recommended):

# Set environment variables (permanent)
[Environment]::SetEnvironmentVariable("ARTIFACTORY_USERNAME", "your-actual-username", "User")
[Environment]::SetEnvironmentVariable("ARTIFACTORY_TOKEN", "your-generated-token", "User")

# Restart PowerShell/terminal after setting

# Verify variables are set
$env:ARTIFACTORY_USERNAME
$env:ARTIFACTORY_TOKEN

Windows (Command Prompt):

# Set environment variables (permanent)
setx ARTIFACTORY_USERNAME "your-actual-username"
setx ARTIFACTORY_TOKEN "your-generated-token"

# Restart Command Prompt after setting

# Verify variables are set
echo %ARTIFACTORY_USERNAME%
echo %ARTIFACTORY_TOKEN%
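Before the first `dotnet restore`, it can save time to confirm both variables are actually visible to your shell. A minimal preflight sketch (bash/zsh; the variable names match the exports above, everything else is illustrative):

```shell
# Preflight check: confirm the Artifactory variables this guide relies on are set.
check_artifactory_env() {
  missing=0
  for name in ARTIFACTORY_USERNAME ARTIFACTORY_TOKEN; do
    # Indirect lookup of the variable named in $name (POSIX-safe via eval)
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      echo "MISSING: $name"
      missing=1
    else
      echo "OK: $name"
    fi
  done
  return "$missing"
}

check_artifactory_env || echo "Set the missing variables before running 'dotnet restore'."
```

Run this in the same shell session you will build from; a variable exported in one terminal is not visible in another.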

3. Update NuGet.config Files

After generating a service, you may need to update the NuGet.config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="your-company" value="https://your-company.jfrog.io/artifactory/api/nuget/nuget/" />
  </packageSources>
  <packageSourceCredentials>
    <your-company>
      <add key="Username" value="%ARTIFACTORY_USERNAME%" />
      <add key="ClearTextPassword" value="%ARTIFACTORY_TOKEN%" />
    </your-company>
  </packageSourceCredentials>
</configuration>

Key Points:

  • Replace your-company with your actual JFrog instance name
  • The credentials use environment variables for security
  • This file is automatically generated but may need the hostname updated

4. Verify Configuration

Test NuGet Package Restore:

# Navigate to your generated service
cd your-service-directory

# Try to restore packages
dotnet restore

# Success looks like:
# Determining projects to restore...
# Restored /path/to/project.csproj (in X ms).

Test Artifactory Connection:

# Test with curl (if available)
curl -u "$ARTIFACTORY_USERNAME:$ARTIFACTORY_TOKEN" \
"https://your-company.jfrog.io/artifactory/api/system/ping"

# Should return "OK"

Common Success Indicators:

  • ✅ dotnet restore completes without authentication errors
  • ✅ Packages download from both nuget.org and your Artifactory
  • ✅ No 401 (Unauthorized) errors in build output
  • ✅ Build times are reasonable (not timing out on package downloads)

Generating .NET gRPC Services

1. Navigate to Your Workspace

cd /path/to/your/workspace
# Example: cd ~/development/a1p-apps

2. Run Archetect Service Generation

Option A: Using Answers File (Non-Interactive)

archetect render . /path/to/output --answer-file answers.yaml

Option B: Interactive Mode

archetect render https://github.com/your-org/dotnet-grpc-service-basic.archetype.git ./my-service

3. Service Generation Prompts

When prompted, provide the following information:

Prompt           | Example Value         | Description                   | Naming Guidelines
-----------------|-----------------------|-------------------------------|------------------
Org Name         | a1p                   | Your organization identifier  | Short, lowercase, no spaces (e.g., acme, p6m)
Solution Name    | apps                  | Solution/project group name   | Descriptive group name (e.g., apps, services, platform)
Project Prefix   | vendor                | Business domain/function name | Domain-specific, camelCase converted (e.g., user, order, payment)
Project Suffix   | service               | Project type identifier       | Usually service, adapter, or orchestrator
Persistence      | None or CockroachDB   | Database option               | See persistence options below
Artifactory Host | your-company.jfrog.io | JFrog instance URL            | Your organization's JFrog hostname

4. Persistence Options

🚫 None (Recommended for Training)

  • Pros: Simpler setup, no Docker required, faster build times
  • Cons: No database persistence, limited for real applications
  • Use When: Learning, prototyping, or stateless services

🐘 CockroachDB

  • Pros: Full persistence layer, production-ready, includes migrations
  • Cons: Requires Docker, more complex setup, integration tests need database
  • Use When: Production services that need data persistence

⚠️ Important: If you choose "None" but later see Entity Framework errors, you'll need to manually clean up database references in the code.

5. Example Service Generation

Scenario 1: Training/Learning Service (No Persistence)

# Interactive generation for learning
archetect render https://github.com/p6m-archetypes/dotnet-grpc-service-basic.archetype.git ./vendor-max-service

# When prompted, enter:
# Org Name: a1p
# Solution Name: apps
# Project Prefix: vendor
# Project Suffix: service
# Persistence: None ← Choose this for easier setup
# Artifactory Host: your-company.jfrog.io

Scenario 2: Production Service (With Database)

# Production service with persistence
archetect render https://github.com/p6m-archetypes/dotnet-grpc-service-basic.archetype.git ./user-auth-service

# When prompted, enter:
# Org Name: acme
# Solution Name: platform
# Project Prefix: user
# Project Suffix: service
# Persistence: CockroachDB ← Choose for production services
# Artifactory Host: acme.jfrog.io

Using Answers File (Non-Interactive)

# Create answers.yaml file first
echo "org-name: a1p
solution-name: apps
prefix-name: vendor
suffix-name: service" > answers.yaml

# Generate using answers file
archetect render . ./vendor-service --answer-file answers.yaml

6. Verify Successful Generation

Expected Generated Structure:

vendor-max-service/
├── VendorMaxService.API/ # gRPC service definitions
├── VendorMaxService.Client/ # Client library for consuming the service
├── VendorMaxService.Core/ # Business logic and domain models
├── VendorMaxService.IntegrationTests/ # Integration test suite
├── VendorMaxService.Persistence/ # Data access layer (if using database)
├── VendorMaxService.Server/ # gRPC server implementation
├── VendorMaxService.UnitTests/ # Unit test suite
├── VendorMaxService.sln # Visual Studio solution file
├── docker-compose.yml # Local development containers
├── Dockerfile # Container build instructions
├── NuGet.config # Package source configuration
└── README.md # Generated documentation

Verification Checklist:

  • All project folders were created
  • .sln file exists and can be opened in Visual Studio/VS Code
  • NuGet.config contains your artifactory host
  • README.md contains build and run instructions
  • Proto files exist in the .API project
  • No obvious generation errors in the terminal output

What Gets Customized:

  • Project names use your prefix/suffix (e.g., VendorMaxService)
  • Namespace names match your inputs
  • Proto service definitions use your naming
  • NuGet.config points to your artifactory
  • Docker configurations are pre-configured
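The PascalCase project names are derived mechanically from your prefix/suffix answers. As a rough illustration of that convention (the exact conversion rules belong to the archetype, so treat this as a sketch):

```shell
# Sketch: convert a kebab-case answer (e.g. "vendor-max") to PascalCase.
# The archetype does this internally; this only illustrates the convention.
pascal_case() {
  printf '%s' "$1" | awk -F'-' '{ for (i = 1; i <= NF; i++) printf "%s%s", toupper(substr($i, 1, 1)), substr($i, 2) }'
}

project_name="$(pascal_case vendor-max)$(pascal_case service)"
echo "$project_name"   # VendorMaxService
```

This is why the prefix `vendor` plus suffix `service` yields projects like `VendorService.Server`.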

Local Development Workflow

1. Initial Setup

# Navigate to your generated service
cd vendor-max-service

# Open in your preferred IDE
code . # VS Code
rider . # JetBrains Rider
devenv *.sln # Visual Studio (Windows)

2. Standard Build Process

Full Build Workflow:

# 1. Clean any previous builds
dotnet clean

# 2. Restore NuGet packages
dotnet restore

# 3. Build the entire solution
dotnet build

# 4. Run tests (optional)
dotnet test

Quick Development Commands:

# Build specific project
dotnet build VendorMaxService.Server

# Build in Release mode
dotnet build -c Release

# Build with verbose output (for troubleshooting)
dotnet build -v detailed

3. IDE-Specific Setup

Visual Studio Code:

# Install recommended extensions
# - C# Dev Kit
# - .NET Install Tool
# - REST Client (for testing)

# Open integrated terminal: Ctrl+` (backtick)
# Use Command Palette: Ctrl+Shift+P

Visual Studio (Windows):

  • Open the .sln file directly
  • Set VendorMaxService.Server as startup project
  • Use built-in NuGet Package Manager for dependencies
  • Built-in debugging tools available

JetBrains Rider:

  • Excellent for cross-platform development
  • Built-in terminal and Git integration
  • Advanced debugging capabilities

4. Running and Debugging

Start the Service Locally:

# Navigate to the server project
cd VendorMaxService.Server

# Run in development mode
dotnet run

# Run with specific environment
dotnet run --environment Development

# Run and watch for file changes (auto-restart)
dotnet watch run

Debug Configuration:

  • Service runs on localhost:5030 (gRPC)
  • Management endpoint on localhost:5031
  • Set breakpoints in your IDE
  • Use hot reload for faster development

5. Testing Strategies

Unit Tests:

# Run all unit tests
dotnet test VendorMaxService.UnitTests

# Run with detailed output
dotnet test --logger "console;verbosity=detailed"

# Run specific test
dotnet test --filter "TestMethodName"

# Run tests with coverage (requires coverage tools)
dotnet test --collect:"XPlat Code Coverage"

Integration Tests:

# Run integration tests (may require setup)
dotnet test VendorMaxService.IntegrationTests

# Skip integration tests if they're not configured
dotnet test --filter "Category!=Integration"

Handling "None" Persistence Issues: If you selected "None" for persistence but tests fail:

  1. Comment Out Integration Tests Temporarily:

    // [Fact]  // Comment out the attribute
    public async Task GetVendorMaxes_ReturnsExpectedData()
    {
        // Test implementation
    }
  2. Remove Database Startup Configuration: Edit Startup.cs and remove/comment:

    // services.AddDbContext<AppDbContext>(...);
    // app.UseDbMigration();
  3. Focus on Unit Tests:

    # Run only unit tests initially
    dotnet test VendorMaxService.UnitTests

6. Common Development Tasks

Adding New gRPC Methods:

  1. Update .proto file in the API project
  2. Build to regenerate C# classes
  3. Implement method in the Server project's gRPC service
  4. Add tests for the new functionality

Making Code Changes:

# After making changes, quick validation:
dotnet build # Ensure it compiles
dotnet test VendorMaxService.UnitTests # Run fast tests
dotnet run # Test locally

Working with Configuration:

  • appsettings.json - Base configuration
  • appsettings.Development.json - Development overrides
  • Environment variables override file settings

7. Debugging and Logging

Enable Detailed Logging:

// In appsettings.Development.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Grpc": "Debug"
    }
  }
}

Common Debugging Scenarios:

  • gRPC calls not working: Check port numbers and proto definitions
  • Authentication issues: Verify interceptors and JWT configuration
  • Database errors: Ensure connection strings and migrations
  • Package restore fails: Check JFrog configuration

IDE Debugging Features:

  • Set breakpoints in service methods
  • Inspect request/response objects
  • Use debug console for immediate evaluation
  • Step through gRPC interceptors

8. Performance and Monitoring

Local Performance Testing:

# Use tools like:
# - Postman for load testing
# - Apache Bench (ab) for simple HTTP load tests
# - Custom gRPC clients for stress testing

Health Checks:

  • Service health endpoint: /health
  • Check logs for startup issues
  • Monitor memory usage during development
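During local startup it can be handy to poll until the service responds rather than guessing when it is up. A minimal retry sketch; the commented-out curl probe assumes the /health path and the management port mentioned earlier in this guide:

```shell
# wait_for: retry a command up to N times with a short delay between attempts.
wait_for() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "not ready after $attempts attempts"
  return 1
}

# Typical use while the service boots (endpoint assumed from this guide):
# wait_for 30 curl -fsS http://localhost:5031/health
```

The probe command is passed as arguments, so the same helper works for curl, grpcurl, or any other check.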

9. Code Organization Best Practices

Project Structure Understanding:

  • API/ - gRPC service definitions (.proto files)
  • Core/ - Business logic, domain models, interfaces
  • Server/ - gRPC service implementations, startup configuration
  • Client/ - Generated client libraries for other services to use
  • Persistence/ - Data access, Entity Framework, repositories
  • UnitTests/ - Fast, isolated tests
  • IntegrationTests/ - Full system tests with dependencies

Development Workflow:

  1. Write/modify business logic in Core/
  2. Update gRPC contracts in API/
  3. Implement service methods in Server/
  4. Add tests in appropriate test projects
  5. Test locally with Postman or gRPC tools

Testing Services Locally

1. Start the Service

# Navigate to the Server project
cd VendorMaxService.Server

# Run the service
dotnet run

# Service will start on localhost:5030 (default gRPC port)

2. Test with Postman

  1. Create New gRPC Request in Postman
  2. Set URL: localhost:5030 (without http://)
  3. Import Service Definition: Postman will auto-discover via reflection
  4. Test Methods:
    • CreateVendorMax - Create a new entity
    • GetVendorMaxes - Retrieve entities

Example CreateVendorMax Request:

{
  "name": "Test Vendor"
}

Example Response:

{
  "id": "generated-id",
  "name": "Test Vendor"
}

3. Verify Service Health

The service should respond to gRPC reflection queries and show available methods.


Generating REST Gateways

1. Understanding gRPC vs REST Gateway

Why Use REST Gateways?

  • gRPC Services: High-performance, binary protocol, ideal for service-to-service communication
  • REST Gateways: HTTP/JSON API, browser-friendly, easier for external integrations
  • Gateway Purpose: Exposes gRPC services as REST APIs for broader accessibility

Architecture:

External Client → (HTTP/JSON) → REST Gateway → (gRPC) → gRPC Service → Database

The gateway translates between JSON over HTTP and Protobuf over gRPC.

2. Generate REST Gateway

Prerequisites:

  • Existing gRPC service already generated and working
  • Service name and structure understood

Generation Process:

# Navigate to your workspace (same level as gRPC service)
cd /path/to/your/workspace

# Use the tutorials example from documentation
archetect render https://github.com/your-org/dotnet-rest-gateway.archetype.git ./vendor-max-gateway

# Alternative: If catalog available
archetect # Then select .NET REST Service

Gateway Generation Prompts:

Prompt              | Example Value      | Description
--------------------|--------------------|------------
Org Name            | a1p                | Same as your gRPC service
Solution Name       | apps               | Same as your gRPC service
Service Integration | vendor-max-service | Name of gRPC service to expose
Gateway Prefix      | vendor-max         | Usually matches the gRPC service prefix
Gateway Suffix      | gateway            | Typically "gateway" or "api"

3. Service Integration Configuration

The gateway automatically:

  • References your gRPC service
  • Creates HTTP endpoints that map to gRPC methods
  • Handles request/response translation between JSON and Protobuf
  • Configures proper error handling and status codes

Example Integration:

// Generated gateway will include:
// POST /api/vendor-max → CreateVendorMax gRPC method
// GET /api/vendor-max → GetVendorMaxes gRPC method
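That method-to-route convention can be sketched as a tiny lookup, purely for illustration (real routes come from the generated gateway code, not from this function):

```shell
# Sketch of the assumed gRPC-method-to-REST-route convention.
route_for() {
  method="$1"; resource="$2"
  case "$method" in
    Create*) echo "POST /api/$resource" ;;
    Get*)    echo "GET /api/$resource" ;;
    *)       echo "unmapped: $method"; return 1 ;;
  esac
}

route_for CreateVendorMax vendor-max   # POST /api/vendor-max
route_for GetVendorMaxes vendor-max    # GET /api/vendor-max
```

Inspect the generated gateway controllers for the authoritative mapping in your project.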

4. Update Gateway Configuration

Fix Artifactory Configuration: Edit NuGet.config in the gateway project:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="your-company" value="https://your-company.jfrog.io/artifactory/api/nuget/nuget/" />
  </packageSources>
</configuration>

Verify Service References: Check that the gateway project references your gRPC service:

<!-- In the gateway .csproj file -->
<ProjectReference Include="..\VendorMaxService.Client\VendorMaxService.Client.csproj" />

5. Build and Test Gateway

Build Process:

cd vendor-max-gateway

# Update NuGet config first if needed
# Then build
dotnet restore
dotnet build

Local Testing:

# Start the gRPC service first
cd ../vendor-max-service/VendorMaxService.Server
dotnet run # Runs on localhost:5030

# In another terminal, start the gateway
cd ../vendor-max-gateway
dotnet run # Typically runs on localhost:5000 or 5001

6. Test REST Gateway

Using curl:

# Test GET endpoint
curl -X GET http://localhost:5000/api/vendor-max

# Test POST endpoint
curl -X POST http://localhost:5000/api/vendor-max \
-H "Content-Type: application/json" \
-d '{"name": "Test Vendor"}'

Using Postman:

  1. Create new HTTP (not gRPC) request
  2. Set URL: http://localhost:5000/api/vendor-max
  3. For POST requests, set body to JSON:
    {
      "name": "Test Vendor"
    }

7. Gateway Deployment Notes

When deployed, the gateway creates:

  • HTTP routes accessible via domain URLs
  • Automatic HTTPS termination
  • Load balancing and scaling
  • Integration with platform authentication

Expected Deployment URLs:

https://vendor-max-gateway.your-domain.com/api/vendor-max

Platform Deployment

1. Initialize Git Repository

cd vendor-max-service

# Initialize git
git init -b main

# Add all files
git add .

# Initial commit
git commit -m "initial commit"

2. GitHub Repository Setup

Repository Naming Convention:

{org-name}-{solution-name}/{prefix-name}-{suffix-name}
# Example: a1p-apps/vendor-max-service
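A throwaway helper makes it easy to double-check the repository path you are about to create (values below are the guide's examples):

```shell
# Compose the expected repository path from the archetype answers:
# {org-name}-{solution-name}/{prefix-name}-{suffix-name}
repo_path() {
  printf '%s-%s/%s-%s\n' "$1" "$2" "$3" "$4"   # org solution prefix suffix
}

repo_path a1p apps vendor-max service   # a1p-apps/vendor-max-service
```

A mismatch between the repository name and the generated project names is a common cause of platform-sync confusion, so it is worth verifying before pushing.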

Create Repository (Method 1 - GitHub CLI):

# Create repository using GitHub CLI (Recommended)
gh repo create a1p-apps/vendor-max-service --public --source=. --remote=origin

# Push to repository
git push -u origin HEAD

Create Repository (Method 2 - Manual):

# If you don't have GitHub CLI:
# 1. Go to GitHub.com
# 2. Navigate to your organization (e.g., a1p-apps)
# 3. Click "New Repository"
# 4. Name: vendor-max-service
# 5. Keep it Public
# 6. Don't initialize with README (you already have one)
# 7. Create Repository

# Then add remote and push:
git remote add origin https://github.com/a1p-apps/vendor-max-service.git
git push -u origin main

Repository Requirements:

  • Public repository (required for platform integration)
  • Proper naming convention following org standards
  • Main branch as default (not master)
  • All generated files committed including Docker, NuGet configs

3. Monitor CI/CD Pipeline

  1. Check GitHub Actions - Verify the build pipeline starts
  2. Monitor Build Status - Ensure all checks pass
  3. Wait for Platform Sync - Services appear in platform directory

4. Platform Registration Process

Understanding the Flow:

GitHub Push → CI/CD Pipeline → Platform Directory → ArgoCD Application Creation

Step-by-Step Process:

  1. GitHub Actions Build - Your push triggers the CI/CD pipeline
  2. Platform Sync - Service appears in .platform directory
  3. ArgoCD Reconciliation - Platform creates ArgoCD application automatically
  4. Service Deployment - Pods and services are created in Kubernetes

⏱️ Timing Expectations:

  • CI/CD pipeline: 2-5 minutes
  • Platform sync: 1-2 minutes
  • ArgoCD reconciliation: 30 seconds - 2 minutes
  • Total time: ~5-10 minutes from push to running service

🔧 Manual Reconciliation (If Needed): If your service doesn't appear in ArgoCD after expected time:

  1. Access the platform reconciliation tools
  2. Manually trigger the sync process
  3. Check for any platform configuration issues

Testing Deployed Services

1. ArgoCD Dashboard Monitoring

Accessing ArgoCD:

  1. Navigate to your organization's ArgoCD dashboard
  2. Look for your application (e.g., vendor-max-service, vendor-max-gateway)
  3. Applications are typically named matching your service/gateway names

Application Status Indicators:

Status      | Icon | Meaning                             | Action Required
------------|------|-------------------------------------|----------------
Synced      | ✅   | Application matches desired state   | None - everything is good
OutOfSync   | ⚠️   | Changes pending deployment          | Wait for auto-sync or manually sync
Healthy     | 💚   | All resources are running properly  | None - service is operational
Degraded    | 🔴   | Some resources have issues          | Investigate pod logs and events
Progressing | 🔄   | Deployment in progress              | Wait for completion

Common Deployment Stages:

  1. Application Created - Shows in ArgoCD but may be empty
  2. Syncing - Resources being applied to cluster
  3. Image Pull - Container images being downloaded
  4. Pod Starting - Containers starting up
  5. Running - Service is fully operational

⚠️ Common Issues:

Image Pull Errors:

  • Symptom: Red status, "ImagePullBackOff" in pod details
  • Cause: Container registry authentication or image not found
  • Solution: Usually resolves automatically; check CI/CD built the image correctly

Resource Limits:

  • Symptom: Pods stuck in "Pending" state
  • Cause: Insufficient cluster resources
  • Solution: Contact platform team if persistent

2. Detailed Application Inspection

Viewing Application Details:

  1. Click on your application in ArgoCD
  2. Application View: Shows all Kubernetes resources (Deployments, Services, ConfigMaps)
  3. Tree View: Hierarchical view of resource relationships
  4. Network View: Service connectivity diagram

Key Resources to Check:

  • Deployment: Should show desired replicas running
  • Service: Network endpoint for your application
  • ConfigMap/Secret: Configuration data
  • Ingress/HTTPRoute: External access routes (for gateways)

Pod Logs and Debugging:

# Get pod status
kubectl get pods -l app=vendor-max-service

# View pod logs
kubectl logs -f deployment/vendor-max-service

# Describe pod for events
kubectl describe pod <pod-name>

3. Port Forward for Testing

Port Forwarding gRPC Services:

# Forward local port to gRPC service
kubectl port-forward svc/vendor-max-service 5030:80

# Service is now accessible at localhost:5030 for gRPC calls

Port Forwarding REST Gateways:

# Forward local port to REST gateway
kubectl port-forward svc/vendor-max-gateway 8080:80

# Gateway is now accessible at localhost:8080 for HTTP calls

Testing Commands:

# Test gRPC service (requires gRPC client)
grpcurl -plaintext localhost:5030 list

# Test REST gateway with curl
curl http://localhost:8080/api/vendor-max

4. Test with Postman

For gRPC Services (via port-forward):

  • Use gRPC request type
  • Connect to localhost:5030
  • Same configuration as local testing

For REST Gateways (via port-forward):

  • Use HTTP request type
  • Connect to localhost:8080
  • Test REST endpoints

5. Access via Domain URLs (Production)

HTTP Routes for REST Gateways: Once fully deployed, REST gateways are accessible via public URLs:

# Example domain URLs
https://vendor-max-gateway.your-domain.com/api/vendor-max

# These URLs are automatically generated when ArgoCD creates:
# - HTTPRoute resources (for external access)
# - Service resources (for internal access)
# - Ingress controllers handle HTTPS termination

Finding Your Service URLs:

  1. In ArgoCD: Look for HTTPRoute resources in your application
  2. Check Route Status: URLs appear in the HTTPRoute resource details
  3. Domain Pattern: Usually follows {service-name}.{org-domain}.com
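Given that pattern, the expected URL can be composed for a quick sanity check (the domain here is a placeholder):

```shell
# Compose the expected public URL from the pattern {service-name}.{org-domain}.
gateway_url() {
  printf 'https://%s.%s/api/%s\n' "$1" "$2" "$3"   # service-name org-domain resource
}

gateway_url vendor-max-gateway your-domain.com vendor-max
# → https://vendor-max-gateway.your-domain.com/api/vendor-max
```

Compare the composed URL against the HTTPRoute resource details in ArgoCD; the route is the source of truth.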

Authentication Notes:

  • Production URLs may have authentication enabled
  • Some services might be behind a proxy for additional security
  • Check with platform team for specific authentication requirements

Comprehensive Testing & Validation Guide

1. Pre-Deployment Testing Checklist

Local Testing Requirements:

  • Service builds successfully (dotnet build)
  • Unit tests pass (dotnet test VendorMaxService.UnitTests)
  • Service starts locally (dotnet run)
  • gRPC endpoints respond (Postman/grpcurl testing)
  • Health endpoints accessible (/health)
  • Configuration valid (appsettings, environment variables)

2. Post-Deployment Validation

ArgoCD Application Health:

# Check application status in ArgoCD
# Should show: ✅ Synced, 💚 Healthy

# Key validation points:
# - All pods are Running (not Pending/Crashing)
# - Services are created and have endpoints
# - ConfigMaps and Secrets are applied
# - HTTPRoutes are created (for gateways)

Pod and Service Validation:

# Check pod status
kubectl get pods -l app=vendor-max-service
# Should show: Running status, 1/1 Ready

# Check service endpoints
kubectl get svc vendor-max-service
# Should show: ClusterIP assigned, correct ports

# Check logs for startup success
kubectl logs deployment/vendor-max-service
# Should show: Application started, listening on port 80

3. Functional Testing

gRPC Service Testing (via Port Forward):

# Port forward to service
kubectl port-forward svc/vendor-max-service 5030:80

# Test gRPC methods using grpcurl
grpcurl -plaintext localhost:5030 list
grpcurl -plaintext localhost:5030 \
vendormaxservice.VendorMaxService/GetVendorMaxes

# Test with Postman (gRPC mode)
# URL: localhost:5030
# Method: GetVendorMaxes, CreateVendorMax

REST Gateway Testing (via Port Forward):

# Port forward to gateway
kubectl port-forward svc/vendor-max-gateway 8080:80

# Test REST endpoints
curl -X GET http://localhost:8080/api/vendor-max
curl -X POST http://localhost:8080/api/vendor-max \
-H "Content-Type: application/json" \
-d '{"name": "Test Vendor"}'

Production URL Testing:

# Test public REST endpoints (when available)
curl -X GET https://vendor-max-gateway.your-domain.com/api/vendor-max

# Note: May require authentication headers
curl -X GET https://vendor-max-gateway.your-domain.com/api/vendor-max \
-H "Authorization: Bearer <token>"

4. Performance & Load Testing

Basic Performance Validation:

# Simple load testing with curl
for i in {1..10}; do
curl -X GET http://localhost:8080/api/vendor-max &
done
wait

# Check response times and error rates
# Monitor pod resource usage during tests
kubectl top pods -l app=vendor-max-service

Health Check Validation:

# Test health endpoints
kubectl port-forward svc/vendor-max-service 8081:81 # Management port
curl http://localhost:8081/health

# Expected response:
# {"status": "Healthy", "totalDuration": "00:00:00.123"}
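In scripts, the status field can be extracted without a JSON tool; a sed-based sketch against the example payload above (prefer jq when available, since real responses may contain more fields):

```shell
# Extract the "status" field from a health-check response on stdin.
health_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

response='{"status": "Healthy", "totalDuration": "00:00:00.123"}'
printf '%s\n' "$response" | health_status   # Healthy
```

This lets a deployment script fail fast when the status is anything other than Healthy.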

5. Integration Testing

Service-to-Service Communication:

# If your service calls other services
# Verify network connectivity and service discovery

# Check service DNS resolution
kubectl exec -it deployment/vendor-max-service -- nslookup other-service

# Test internal service calls
kubectl logs deployment/vendor-max-service | grep "calling other-service"

Database Connectivity (if using CockroachDB):

# Check database connection from service
kubectl logs deployment/vendor-max-service | grep -i "database\|connection"

# Should see successful connection logs
# No connection timeout or authentication errors

6. Security & Authentication Testing

Authentication Testing (if enabled):

# Test without authentication (should fail)
curl -X GET https://vendor-max-gateway.your-domain.com/api/vendor-max
# Expected: 401 Unauthorized

# Test with valid token (should succeed)
curl -X GET https://vendor-max-gateway.your-domain.com/api/vendor-max \
-H "Authorization: Bearer <valid-token>"
# Expected: 200 OK with data

Network Security:

# Verify gRPC services are not directly accessible
curl -X GET https://vendor-max-service.your-domain.com/
# Expected: connection refused or 404 (good: the gRPC service is not publicly exposed)

# Only gateways should have public URLs

7. Monitoring & Observability

Log Analysis:

# Check for errors in logs
kubectl logs deployment/vendor-max-service | grep -i error

# Check for performance warnings
kubectl logs deployment/vendor-max-service | grep -i "slow\|timeout"

# Verify request logging
kubectl logs deployment/vendor-max-service | grep "HTTP\|gRPC"
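A simple error tally over log text can gate a deployment script; the sketch below uses sample text rather than live kubectl output:

```shell
# Count case-insensitive "error" lines from log text on stdin.
count_errors() {
  grep -ci 'error' || true   # grep -c exits 1 on zero matches; treat that as success
}

sample_logs='info: Application started
warn: slow query (1200ms)
Error: transient db timeout
ERROR: retrying connection'

printf '%s\n' "$sample_logs" | count_errors   # 2
```

In practice you would pipe `kubectl logs deployment/vendor-max-service` into `count_errors` and fail the check if the count exceeds a threshold.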

Resource Monitoring:

# Check CPU and memory usage
kubectl top pods -l app=vendor-max-service

# Typical good values:
# CPU: <100m (0.1 CPU cores) for low traffic
# Memory: <256Mi for basic services

8. Deployment Validation Checklist

✅ Complete Validation Checklist:

  • ArgoCD Status: Synced and Healthy
  • Pods Running: All replicas in Running state
  • Services Created: ClusterIP assigned, correct ports
  • HTTPRoutes Active: (for gateways) Routes have valid status
  • Health Checks Pass: Service responds to health endpoints
  • Functional Testing: gRPC/REST endpoints work correctly
  • Authentication Works: (if enabled) Proper auth validation
  • Performance Acceptable: Response times under acceptable thresholds
  • Logs Clean: No errors or warnings in application logs
  • Resource Usage: CPU/Memory within expected ranges

9. Rollback Testing

Rollback Procedure (if needed):

# In ArgoCD, access application
# Go to "History and Rollback"
# Select previous successful revision
# Click "Rollback"

# Or via kubectl:
kubectl rollout undo deployment/vendor-max-service

# Verify rollback success
kubectl rollout status deployment/vendor-max-service

Post-Rollback Validation:

  • Repeat functional testing checklist
  • Verify previous version is running
  • Confirm issues are resolved

Comprehensive Troubleshooting & Quick Reference

🔧 Complete Troubleshooting Guide

Service Generation Issues:

Problem              | Symptoms                      | Solution
---------------------|-------------------------------|---------
Archetect not found  | command not found: archetect  | Install archetect, check PATH
Template not found   | failed to clone repository    | Verify archetype URL, check network
Generation fails     | Incomplete project structure  | Check answers file format, retry
Wrong file structure | Missing projects or files     | Verify input parameters, regenerate

Build and Compilation Issues:

Problem                 | Symptoms                      | Solution
------------------------|-------------------------------|---------
Package restore fails   | Unable to load service index  | Fix JFrog credentials, check NuGet.config
gRPC build errors       | Google.Protobuf.Tools.targets | Update gRPC packages to compatible versions
Apple Silicon issues    | bad CPU type and executable   | Use ARM64 compatible packages
.NET version conflicts  | Build fails with version errors | Install required .NET SDK version
Entity Framework errors | Database context not found    | Remove DB references if using "None" persistence

Local Development Issues:

Problem             | Symptoms                        | Solution
--------------------|---------------------------------|---------
Service won't start | Port binding or startup errors  | Check port conflicts, validate configuration
Tests failing       | Integration test failures       | Comment out DB-dependent tests for "None" persistence
gRPC not working    | Connection refused              | Verify port numbers, proto definitions
Hot reload broken   | Changes not reflected           | Restart with dotnet watch run

Deployment and Platform Issues:

| Problem | Symptoms | Solution |
| --- | --- | --- |
| Not in ArgoCD | Service missing from dashboard | Check CI/CD success, trigger platform reconciliation |
| ImagePullBackOff | Red status in ArgoCD | Wait 3-5 minutes, verify CI/CD built image |
| Pods stuck Pending | Resources not scheduling | Check cluster resources, contact platform team |
| OutOfSync status | Changes not deployed | Manual sync in ArgoCD, check auto-sync settings |
| Health checks failing | Service degraded | Check application logs, verify health endpoints |

Network and Connectivity Issues:

| Problem | Symptoms | Solution |
| --- | --- | --- |
| Port forward fails | Connection refused | Verify pod is running, correct service name |
| URLs not accessible | 404 or connection timeout | Check HTTPRoute status, DNS propagation |
| Authentication failures | 401 Unauthorized | Verify tokens, check auth proxy configuration |
| Service discovery fails | Services can't communicate | Check network policies, service endpoints |

📋 Ultimate Quick Reference

🚀 Service Generation Workflow:

# 1. Generate Service
archetect render <archetype-url> ./my-service

# 2. Update JFrog config if needed
# Edit NuGet.config with correct artifactory host

# 3. Build and test
cd my-service
dotnet clean && dotnet restore && dotnet build
dotnet test

# 4. Run locally
cd MyService.Server
dotnet run # localhost:5030

# 5. Test with Postman (gRPC mode)
# URL: localhost:5030

🌐 Gateway Generation Workflow:

# 1. Generate Gateway
archetect render <gateway-archetype-url> ./my-gateway

# 2. Update configs and build
cd my-gateway
# Edit NuGet.config if needed
dotnet build

# 3. Run gateway (with service running)
dotnet run # localhost:5000

# 4. Test with curl/Postman (HTTP mode)
curl http://localhost:5000/api/my-service

📤 Deployment Workflow:

# 1. Initialize Git
git init -b main
git add .
git commit -m "initial commit"

# 2. Create GitHub repo
gh repo create org/my-service --public --source=. --remote=origin
git push -u origin HEAD

# 3. Monitor deployment (5-10 minutes total)
# - GitHub Actions (2-5 min)
# - Platform sync (1-2 min)
# - ArgoCD reconciliation (1-2 min)
# - Pod startup (2-3 min)

🔍 Testing Deployed Services:

# 1. Check ArgoCD
# Status should be: ✅ Synced, 💚 Healthy

# 2. Port forward and test
kubectl port-forward svc/my-service 5030:80
grpcurl -plaintext localhost:5030 list

kubectl port-forward svc/my-gateway 8080:80
curl http://localhost:8080/api/my-service

# 3. Check production URLs (when available)
curl https://my-gateway.your-domain.com/api/my-service

🐛 Emergency Troubleshooting:

# Quick diagnosis commands
kubectl get pods -l app=my-service
kubectl logs deployment/my-service
kubectl describe deployment my-service

# Quick fixes
kubectl rollout restart deployment/my-service # Restart pods
kubectl rollout undo deployment/my-service # Rollback

⚡ Essential Environment Setup:

# Required environment variables
export ARTIFACTORY_USERNAME="your-username"
export ARTIFACTORY_TOKEN="your-token"

# Verify setup
dotnet --version # Should show .NET 8.x or 9.x
archetect --version
gh --version
kubectl version --client

📊 Status Indicators Reference:

ArgoCD Application Status:

  • 🟢 Healthy + Synced = All good, service operational
  • 🟡 Progressing = Deployment in progress, wait
  • 🔴 Degraded = Issues detected, check logs
  • ⚠️ OutOfSync = Changes pending, may auto-sync

Pod Status:

  • Running = Healthy and serving traffic
  • Pending = Waiting for resources/scheduling
  • ImagePullBackOff = Image download issue (usually temporary)
  • CrashLoopBackOff = Application crashing, check logs

Typical Deployment Timeline:

0 min:   Push to GitHub
2 min:   CI/CD build complete
4 min:   Platform sync, ArgoCD app created
6 min:   Image pull, pods starting
8 min:   Health checks pass
10 min:  Service fully operational

🆘 When to Escalate:

  • Build fails repeatedly after 3 attempts
  • ArgoCD stuck in failed state for >10 minutes
  • Platform reconciliation not working
  • Cluster resource issues (Pending pods for >5 minutes)
  • Network connectivity problems
  • Authentication/authorization configuration issues

Troubleshooting

Build Issues

Problem: Google.Protobuf.Tools.targets could not be loaded
Solution:

# Update gRPC packages
dotnet add package Grpc.AspNetCore --version 2.60.0
dotnet restore
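
If you prefer editing the project file directly, the equivalent .csproj entry looks like the sketch below (the 2.60.0 version is illustrative; match it to the versions your archetype's other gRPC dependencies expect):

```xml
<ItemGroup>
  <!-- Pin Grpc.AspNetCore to a version compatible with your generated code -->
  <PackageReference Include="Grpc.AspNetCore" Version="2.60.0" />
</ItemGroup>
```

After editing, run dotnet restore again so the updated reference is picked up.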

Problem: Bad CPU type and executable (Apple Silicon)
Solution: Ensure you're using package versions compatible with ARM64

Problem: .NET version compatibility issues
Solution:

# Check which .NET versions are installed
dotnet --list-sdks

# If archetype requires .NET 8 but you only have 9:
# Install .NET 8 SDK from Microsoft downloads
# Or check if the archetype can be updated to work with .NET 9
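
One way to make the SDK requirement explicit is a global.json at the repository root, so builds fail fast with a clear message instead of silently using the wrong SDK. A sketch, assuming the archetype needs .NET 8 (the exact version number is illustrative):

```json
{
  "sdk": {
    "version": "8.0.100",
    "rollForward": "latestFeature"
  }
}
```

With rollForward set to latestFeature, any installed 8.0.x SDK satisfies the pin while .NET 9 is ignored for this repository.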

Problem: Unable to load service index (401 Unauthorized)
Solution:

  1. Verify Environment Variables Are Set:

    # Mac/Linux
    echo $ARTIFACTORY_USERNAME
    echo $ARTIFACTORY_TOKEN

    # Windows PowerShell
    $env:ARTIFACTORY_USERNAME
    $env:ARTIFACTORY_TOKEN

    # Windows Command Prompt
    echo %ARTIFACTORY_USERNAME%
    echo %ARTIFACTORY_TOKEN%
  2. Check NuGet.config File:

    • Verify the artifactory URL matches your organization's instance
    • Ensure the credentials section references the environment variables correctly
    • Check that the packageSource name matches the credentials section name
  3. Regenerate JFrog Token:

    • Tokens may expire - generate a new one if the current one is old
    • Ensure the new token has proper permissions
    • Update the environment variable with the new token
  4. Test Artifactory Access Directly:

    curl -u "$ARTIFACTORY_USERNAME:$ARTIFACTORY_TOKEN" \
    "https://your-company.jfrog.io/artifactory/api/system/ping"

Problem: Environment variables not persisting
Solution:

  1. Mac: Ensure you're editing the correct shell profile (.zshrc vs .bash_profile)
  2. Windows: Use setx for permanent variables, restart terminal after setting
  3. All platforms: Verify variables are set in a new terminal window
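
For example, on macOS with zsh, persisting the variables might look like the following (the profile path and credential values are placeholders; use ~/.bash_profile for bash):

```shell
# Append the exports to the shell profile if they are not already present
PROFILE="$HOME/.zshrc"
grep -q 'ARTIFACTORY_USERNAME' "$PROFILE" 2>/dev/null || {
  echo 'export ARTIFACTORY_USERNAME="your-username"' >> "$PROFILE"
  echo 'export ARTIFACTORY_TOKEN="your-token"' >> "$PROFILE"
}

# Load into the current session and confirm the value is visible
. "$PROFILE"
echo "$ARTIFACTORY_USERNAME"
```

On Windows, setx ARTIFACTORY_USERNAME "your-username" persists the variable, but it only takes effect in terminals opened after running it.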

Problem: NuGet.config pointing to wrong artifactory
Solution:

  1. Check generated NuGet.config file in your service directory
  2. Update the URL to match your organization's JFrog instance:
    <add key="your-company" value="https://your-actual-company.jfrog.io/artifactory/api/nuget/nuget/" />
  3. Remember: This may need to be updated manually after service generation
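
A correctly wired NuGet.config ties the package source and its credentials together by name. A minimal sketch, assuming environment-variable credentials (the hostname and the "your-company" source name are placeholders; the credentials element name must match the packageSource key exactly):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="your-company" value="https://your-company.jfrog.io/artifactory/api/nuget/nuget/" />
  </packageSources>
  <packageSourceCredentials>
    <!-- Element name must match the packageSource key above -->
    <your-company>
      <add key="Username" value="%ARTIFACTORY_USERNAME%" />
      <add key="ClearTextPassword" value="%ARTIFACTORY_TOKEN%" />
    </your-company>
  </packageSourceCredentials>
</configuration>
```

NuGet expands %VAR% references from the environment at restore time, so no secrets need to be committed.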

Integration Test Failures

Problem: Tests fail when persistence is set to "None"
Solution:

  1. Comment out failing integration tests
  2. Remove database-related startup configuration
  3. Focus on unit tests until persistence is properly configured

Deployment Issues

Problem: Service not appearing in ArgoCD
Solution:

  1. Check GitHub Actions Build:

    • Verify CI/CD pipeline completed successfully
    • Look for any build failures or test failures
    • Ensure Docker image was built and pushed
  2. Verify Platform Sync:

    • Check if service appears in .platform directory
    • Platform sync happens before ArgoCD application creation
    • May take 1-3 minutes after successful CI/CD
  3. Manual Platform Reconciliation:

    • Access platform reconciliation tools
    • Trigger manual sync process to force ArgoCD application creation
    • Contact platform team if reconciliation consistently fails
  4. Check ArgoCD Application Creation:

    • Look for the application in ArgoCD dashboard
    • Applications may appear but be empty initially
    • Allow 2-5 minutes for full resource deployment

Problem: ArgoCD shows "ImagePullBackOff"
Solution:

  1. Usually Temporary: This often resolves automatically as the system retries
  2. Check CI/CD: Verify the container image was built and pushed successfully
  3. Wait Period: Allow 3-5 minutes for automatic resolution
  4. Manual Sync: Try manually syncing the ArgoCD application

Problem: Service stuck in "Progressing" state
Solution:

  1. Check Pod Logs: Look for startup errors or configuration issues
  2. Resource Limits: Verify cluster has sufficient CPU/memory
  3. Health Checks: Ensure application passes health check endpoints
  4. Dependencies: Verify any required services (databases, etc.) are available

Problem: ArgoCD shows "OutOfSync"
Solution:

  1. Auto-Sync Enabled: Most applications auto-sync, wait 1-2 minutes
  2. Manual Sync: Click "Sync" button in ArgoCD if auto-sync is disabled
  3. Configuration Drift: Check if manual changes were made to resources
  4. Refresh: Try refreshing the application in ArgoCD

Problem: Cannot access deployed service URLs
Solution:

  1. Check HTTPRoute Status: Verify routes are created and have valid status
  2. DNS Propagation: New domains may take time to propagate
  3. Authentication: Verify you have access permissions for the service
  4. Network Policies: Check if service is behind authentication proxy

Networking Issues

Problem: Can't access deployed service
Solution:

  1. Verify pod is running in Kubernetes
  2. Check port-forward command syntax
  3. Ensure firewall/network policies allow traffic

Quick Reference

Common Commands

Service Generation & Development:

# Generate gRPC Service
archetect render <archetype-url> ./my-service

# Generate REST Gateway
archetect render <gateway-archetype-url> ./my-gateway

# Build and Test
dotnet clean && dotnet restore && dotnet build && dotnet test

# Run Service Locally
cd MyService.Server && dotnet run

Git & Deployment:

# Create GitHub Repo
git init -b main && git add . && git commit -m "initial commit"
gh repo create org/repo-name --public --source=. --remote=origin
git push -u origin HEAD

Kubernetes & ArgoCD:

# Port Forward to Deployed Service  
kubectl port-forward svc/my-service 5030:80
kubectl port-forward svc/my-gateway 8080:80

# Check Pod Status
kubectl get pods -l app=my-service

# View Pod Logs
kubectl logs -f deployment/my-service

# Describe Resources
kubectl describe deployment my-service

ArgoCD Monitoring:

# Access ArgoCD (varies by organization)
# Usually: https://argocd.your-domain.com

# Key things to check in ArgoCD:
# - Application Status: Should be "Synced" and "Healthy"
# - Resource View: All pods should be green/running
# - Sync History: Look for any recent sync failures

Environment Variables Checklist

# Required for .NET builds
export ARTIFACTORY_USERNAME="your-username"
export ARTIFACTORY_TOKEN="your-token"

Service Testing Endpoints

  • Local gRPC: localhost:5030
  • Local Management: localhost:5031
  • Deployed gRPC (via port-forward): localhost:5030
  • Deployed REST (via port-forward): localhost:8080
  • Production REST: https://service-name.your-domain.com

ArgoCD Status Quick Reference

Application Health:

  • 🟢 Healthy - All resources running normally
  • 🔴 Degraded - Some resources have issues
  • 🟡 Progressing - Deployment in progress
  • Unknown - Health status cannot be determined

Sync Status:

  • Synced - Deployed state matches desired state
  • ⚠️ OutOfSync - Changes need to be deployed
  • 🔄 Syncing - Deployment in progress

Common Resource States:

  • Running - Pod is healthy and serving traffic
  • Pending - Pod waiting for resources or scheduling
  • ImagePullBackOff - Cannot download container image (usually temporary)
  • CrashLoopBackOff - Pod keeps restarting due to errors

Deployment Timeline:

  1. 0-2 min: GitHub Actions build
  2. 2-4 min: Platform sync and ArgoCD application creation
  3. 4-6 min: Image pull and pod startup
  4. 6-8 min: Service health checks and traffic routing
  5. 8-10 min: Full deployment complete

Additional Resources


This guide was generated from the training session on July 23, 2024. For questions or issues not covered here, reach out to the platform team.