# Class 07 — API Testing
## Why test APIs?
| Test type | What it validates | Tools |
|---|---|---|
| Functional | Does it do what it should? | curl, Postman, pytest |
| Contract | Does it follow the specification? | Schemathesis, Dredd |
| Performance | Can it handle load? | Apache Bench, k6, wrk |
| Security | Is it secure? | OWASP ZAP, Burp Suite |
| Smoke | Is it alive? | curl (health check) |
| Integration | Does it work with other services? | pytest, bash scripts |
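Contract testing normally means running a tool like Schemathesis against the API's OpenAPI document. As a rough illustration of the idea only, here is a hand-rolled shape check; the field names and types mirror the Pokémon payloads used later in this class and are an assumption, not a real spec:

```python
# Minimal "contract" check: does a response body match the expected shape?
# EXPECTED_SHAPE is an assumed schema for illustration; real contract testing
# validates responses against the service's actual OpenAPI specification.
EXPECTED_SHAPE = {"nombre": str, "tipo": str, "nivel": int, "hp": int}

def check_contract(body):
    """Return a list of violations; an empty list means the body conforms."""
    violations = []
    for field, ftype in EXPECTED_SHAPE.items():
        if field not in body:
            violations.append(f"missing field: {field}")
        elif not isinstance(body[field], ftype):
            violations.append(
                f"{field}: expected {ftype.__name__}, got {type(body[field]).__name__}"
            )
    return violations

print(check_contract({"nombre": "ditto", "tipo": "normal", "nivel": 10, "hp": 48}))  # []
print(check_contract({"nombre": "ditto", "hp": "48"}))  # missing tipo/nivel, hp wrong type
```

The point of the dedicated tools is that they derive `EXPECTED_SHAPE` from the spec automatically and fuzz the endpoints, instead of you maintaining it by hand.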
## Functional testing with curl
```bash
#!/bin/bash
# test-api.sh - Test suite for a REST API
BASE_URL="${1:-http://localhost:8000}"
PASSED=0; FAILED=0; TOTAL=0
RED='\033[0;31m'; GREEN='\033[0;32m'; NC='\033[0m'

assert_status() {
    local desc="$1" method="$2" url="$3" expected="$4"
    shift 4; local args=("$@")
    TOTAL=$((TOTAL + 1))
    local actual
    actual=$(curl -s -o /dev/null -w "%{http_code}" -X "$method" "${args[@]}" "$url")
    if [[ "$actual" == "$expected" ]]; then
        printf "${GREEN} ✅ PASS${NC} %s (HTTP %s)\n" "$desc" "$actual"
        PASSED=$((PASSED + 1))
    else
        printf "${RED} ❌ FAIL${NC} %s (expected: %s, got: %s)\n" "$desc" "$expected" "$actual"
        FAILED=$((FAILED + 1))
    fi
}

assert_json() {
    local desc="$1" url="$2" jq_filter="$3" expected="$4"
    TOTAL=$((TOTAL + 1))
    local actual
    actual=$(curl -s "$url" | jq -r "$jq_filter")
    if [[ "$actual" == "$expected" ]]; then
        printf "${GREEN} ✅ PASS${NC} %s (%s)\n" "$desc" "$actual"
        PASSED=$((PASSED + 1))
    else
        printf "${RED} ❌ FAIL${NC} %s (expected: '%s', got: '%s')\n" "$desc" "$expected" "$actual"
        FAILED=$((FAILED + 1))
    fi
}

echo "═══════════════════════════════════════"
echo " API Test Suite: $BASE_URL"
echo "═══════════════════════════════════════"
echo "📋 Health Check:"
assert_status "GET /health → 200" GET "$BASE_URL/health" 200
assert_json "Health status is ok" "$BASE_URL/health" ".status" "ok"
echo ""
echo "📋 CRUD - Pokémon:"
assert_status "GET /pokemon → 200" GET "$BASE_URL/api/v1/pokemon" 200
assert_json "List has data" "$BASE_URL/api/v1/pokemon" ".data | length > 0" "true"
echo ""
echo "📋 CREATE:"
assert_status "POST /pokemon → 201" POST "$BASE_URL/api/v1/pokemon" 201 \
    -H "Content-Type: application/json" \
    -d '{"nombre":"ditto","tipo":"normal","nivel":10,"hp":48}'
echo ""
echo "📋 Error Cases:"
assert_status "GET /nonexistent → 404" GET "$BASE_URL/api/v1/pokemon/99999" 404
echo ""
echo "═══════════════════════════════════════"
echo " Results: $PASSED/$TOTAL passed"
[[ $FAILED -gt 0 ]] && echo -e " ${RED}$FAILED tests failed${NC}"
[[ $FAILED -eq 0 ]] && echo -e " ${GREEN}All tests passed ✅${NC}"
echo "═══════════════════════════════════════"
exit "$FAILED"
```
## Testing with pytest (Python)
```python
# test_api.py
import pytest
import requests

BASE_URL = "http://localhost:8000"

class TestHealthCheck:
    def test_health_returns_200(self):
        r = requests.get(f"{BASE_URL}/health")
        assert r.status_code == 200

    def test_health_status_ok(self):
        r = requests.get(f"{BASE_URL}/health")
        assert r.json()["status"] == "ok"

class TestPokemonCRUD:
    def test_list_pokemon(self):
        r = requests.get(f"{BASE_URL}/api/v1/pokemon")
        assert r.status_code == 200
        assert len(r.json()["data"]) > 0

    def test_get_pokemon(self):
        r = requests.get(f"{BASE_URL}/api/v1/pokemon/25")
        assert r.status_code == 200
        assert r.json()["nombre"] == "pikachu"

    def test_get_pokemon_not_found(self):
        r = requests.get(f"{BASE_URL}/api/v1/pokemon/99999")
        assert r.status_code == 404

    def test_create_pokemon(self):
        data = {"nombre": "mewtwo", "tipo": "psychic", "nivel": 70, "hp": 106}
        r = requests.post(f"{BASE_URL}/api/v1/pokemon", json=data)
        assert r.status_code == 201
        assert r.json()["nombre"] == "mewtwo"

    def test_delete_pokemon(self):
        r = requests.post(f"{BASE_URL}/api/v1/pokemon",
                          json={"nombre": "temp", "tipo": "normal"})
        pokemon_id = r.json()["id"]
        r = requests.delete(f"{BASE_URL}/api/v1/pokemon/{pokemon_id}")
        assert r.status_code == 204
        r = requests.get(f"{BASE_URL}/api/v1/pokemon/{pokemon_id}")
        assert r.status_code == 404
```
```bash
pip install pytest requests
pytest test_api.py -v
```
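One pitfall in create-style tests like `test_create_pokemon` is reusing the same name on every run, which fails if the API enforces uniqueness. A small helper (a generic sketch, nothing API-specific) generates collision-free test data:

```python
import uuid

def unique_name(prefix="test"):
    """Return a name like 'test-3f9a1c2b' that won't collide between runs."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# Possible usage inside a test:
# data = {"nombre": unique_name("mewtwo"), "tipo": "psychic", "nivel": 70, "hp": 106}
print(unique_name())
```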
## Performance testing
### Apache Bench (ab)
```bash
# 100 requests, 10 concurrent
ab -n 100 -c 10 http://localhost:8000/api/v1/pokemon

# Key lines in the output:
# Requests per second: 2500.00 [#/sec]
# Time per request: 4.000 [ms]
```
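Those two ab numbers are linked: at a fixed concurrency, mean latency ≈ concurrency / throughput (Little's law). A quick sanity check on the example figures above:

```python
# Little's law: mean time per request = concurrent clients / requests per second
concurrency = 10
requests_per_second = 2500.0
mean_latency_ms = concurrency / requests_per_second * 1000
print(mean_latency_ms)  # 4.0, matching the "Time per request" line
```

If the two figures in your own ab output don't roughly satisfy this relationship, something in the run (warm-up, timeouts, errors) deserves a closer look.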
### Script: simple benchmark with curl
```bash
#!/bin/bash
URL="${1:-http://localhost:8000/api/v1/pokemon}"
REQUESTS="${2:-50}"
echo "Benchmark: $URL ($REQUESTS requests)"
times=()
for i in $(seq 1 "$REQUESTS"); do
    time=$(curl -s -o /dev/null -w "%{time_total}" --max-time 10 "$URL")
    times+=("$time")
done
sorted=($(printf '%s\n' "${times[@]}" | sort -n))
total=${#sorted[@]}
min=${sorted[0]}
max=${sorted[-1]}
# Nearest-rank percentiles: index = ceil(total * p / 100) - 1
p50=${sorted[$(( (total * 50 + 99) / 100 - 1 ))]}
p95=${sorted[$(( (total * 95 + 99) / 100 - 1 ))]}
sum=$(printf '%s\n' "${times[@]}" | paste -sd+ | bc)
avg=$(echo "scale=3; $sum / $total" | bc)
echo " Min: ${min}s | Max: ${max}s | Avg: ${avg}s"
echo " P50: ${p50}s | P95: ${p95}s"
echo " Req/s: $(echo "scale=1; $total / $sum" | bc)"
```

Note that the requests run sequentially, so `Req/s` here is single-client throughput (the inverse of average latency), not the concurrent throughput that ab measures.
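Percentile math is easy to get subtly wrong in shell arithmetic; if Python is available, the standard library does it for you. A sketch computing the same statistics from a list of latencies (the sample data here is made up):

```python
from statistics import mean, quantiles

# Hypothetical per-request latencies in seconds, e.g. collected from
# curl -w "%{time_total}" as in the bash script above.
times = [0.012, 0.015, 0.011, 0.030, 0.013, 0.014, 0.090, 0.012, 0.016, 0.013]

# quantiles(n=100) returns 99 cut points; index 49 is P50, index 94 is P95.
cuts = quantiles(times, n=100, method="inclusive")
p50, p95 = cuts[49], cuts[94]

print(f"Min: {min(times)}s | Max: {max(times)}s | Avg: {mean(times):.4f}s")
print(f"P50: {p50:.4f}s | P95: {p95:.4f}s")
```

Notice how one slow outlier (0.090s) barely moves P50 but dominates P95, which is exactly why percentiles are reported alongside the average.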
## Smoke tests in CI/CD
```bash
#!/bin/bash
# smoke-test.sh
BASE_URL="${1:?Usage: $0 <base_url>}"
FAILURES=0

smoke() {
    local desc="$1" method="$2" path="$3" expected="$4"
    local code
    code=$(curl -s -o /dev/null -w "%{http_code}" -X "$method" --max-time 10 "${BASE_URL}${path}")
    if [[ "$code" == "$expected" ]]; then
        echo " ✅ $desc"
    else
        echo " ❌ $desc (HTTP $code, expected $expected)"
        ((FAILURES++))
    fi
}

echo "Smoke Tests: $BASE_URL"
smoke "Health check" GET "/health" 200
smoke "API responds" GET "/api/v1/pokemon" 200
smoke "Resource exists" GET "/api/v1/pokemon/25" 200
smoke "404 works" GET "/api/v1/pokemon/99999" 404
[[ $FAILURES -eq 0 ]] && echo "✅ All passed" || echo "❌ $FAILURES failed"
exit "$FAILURES"
```
### In GitHub Actions
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start API
        run: docker compose up -d && sleep 10
      - name: Smoke Tests
        run: ./smoke-test.sh http://localhost:8000
      - name: Full Tests
        run: pip install pytest requests && pytest test_api.py -v
      - name: Performance
        # ab ships in apache2-utils, which is not guaranteed on the runner
        run: sudo apt-get install -y apache2-utils && ab -n 100 -c 10 http://localhost:8000/api/v1/pokemon
```
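The `sleep 10` in the Start API step is a guess: too short and the smoke tests race the container, too long and every pipeline run wastes time. Polling the health endpoint is more robust. A sketch of the pattern in Python, with the actual check injected as a callable (for a real API it would be something like a `requests.get` on `/health`):

```python
import time

def wait_until_healthy(check, attempts=30, delay=1.0):
    """Poll `check` (a callable returning True when the service is up)
    until it succeeds or we run out of attempts. Returns True on success."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Example check (assumes `requests` is installed and /health exists):
# ok = wait_until_healthy(
#     lambda: requests.get("http://localhost:8000/health", timeout=2).status_code == 200
# )
```

The same idea in pure shell is a `for` loop around `curl --max-time 2 "$BASE_URL/health"` with a short `sleep` between attempts.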
## Exercises
- Write a test script with `assert_status` and `assert_json` for a REST API
- Write pytest tests covering CRUD + validations + error cases
- Run Apache Bench with different concurrency levels and compare the results
- Implement smoke tests for use in a CI/CD pipeline
- Build a benchmark that measures TTFB, throughput, and percentiles (P50, P95, P99)