TechLead
Lesson 8 of 8

Advanced Prompting Techniques

Self-Consistency, ReAct, Tree of Thoughts, and other state-of-the-art methods

Beyond basic prompting, researchers have developed sophisticated techniques that significantly improve LLM performance on complex tasks. These methods push the limits of what is possible with prompt engineering.

Self-Consistency

Self-consistency generates multiple reasoning paths and selects the most common answer. It is like asking several experts and going with the consensus.

// Self-Consistency Implementation
async function selfConsistencyPrompt(question, samples = 5) {
  const responses = await Promise.all(
    Array(samples).fill(null).map(() =>
      prompt(`
        ${question}
        
        Let's think step by step, then give your final answer
        in the format: ANSWER: [your answer]
      `, { temperature: 0.7 }) // Higher temp for diversity
    )
  );
  
  // Extract answers and count votes
  const answers = responses.map(r => 
    r.match(/ANSWER:\s*(.+)/i)?.[1]?.trim()
  );
  
  const votes = {};
  // Skip responses where no ANSWER: line could be extracted
  answers.filter(Boolean).forEach(a => votes[a] = (votes[a] || 0) + 1);
  
  // Return most common answer
  return Object.entries(votes)
    .sort((a, b) => b[1] - a[1])[0][0];
}

// Example usage
const answer = await selfConsistencyPrompt(
  "What is the time complexity of quicksort in the average case?"
);
// Samples might give: O(n log n), O(n log n), O(n²), O(n log n), O(n log n)
// Consensus: O(n log n)
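The voting step above compares the extracted answers as raw strings, so superficial formatting differences ("O(n log n)" vs. "O(n  log n)") would split the vote. A minimal sketch of a normalization step — these helpers are not part of the original example:

```javascript
// Hypothetical helpers: normalize answers before voting so formatting
// differences don't split the count, then tally and return the winner.
function normalizeAnswer(answer) {
  if (!answer) return null; // response had no parseable ANSWER: line
  return answer.toLowerCase().replace(/\s+/g, " ").trim();
}

function majorityVote(answers) {
  const votes = {};
  for (const a of answers.map(normalizeAnswer)) {
    if (a === null) continue;
    votes[a] = (votes[a] || 0) + 1;
  }
  const ranked = Object.entries(votes).sort((x, y) => y[1] - x[1]);
  return ranked.length > 0 ? ranked[0][0] : null;
}
```

With this in place, the vote-counting section of `selfConsistencyPrompt` could be replaced by a single `majorityVote(answers)` call.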

🧠 ReAct: Reasoning + Acting

ReAct interleaves reasoning with actions (such as tool use). The model thinks about what to do, executes an action, observes the result, and continues reasoning.

The ReAct pattern

In practice, ReAct structures each step as a Thought, an Action, and an Observation, letting the model interact with external tools in a reasoned way.

// ReAct Prompt Structure
"You have access to these tools:
- search(query): Search the web for information
- calculate(expression): Evaluate math expressions
- lookup(term): Look up a term in a knowledge base

For each step, use this format:
Thought: [your reasoning about what to do next]
Action: [tool_name(input)]
Observation: [result from the tool]
... (repeat until done)
Final Answer: [your answer]

Question: What is the population of France divided by 10?"

// Model Response:
Thought: I need to find the population of France first.
Action: search("population of France 2024")
Observation: France has a population of approximately 68 million.

Thought: Now I need to divide 68 million by 10.
Action: calculate(68000000 / 10)
Observation: 6800000

Thought: I have the answer.
Final Answer: The population of France divided by 10 is 6.8 million.
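Driving this loop in code requires extracting the tool call from each model turn. A hypothetical sketch of that parsing step, assuming the single-argument `Action: tool(input)` format shown above:

```javascript
// Hypothetical parser for one "Action: tool(input)" line from a ReAct turn.
// Assumes the single-argument format used above; a real agent loop would
// dispatch the parsed call to the matching tool and feed back an Observation.
function parseAction(line) {
  const match = line.match(/^Action:\s*(\w+)\((.*)\)\s*$/);
  if (!match) return null; // not an Action line (e.g. a Thought)
  return {
    tool: match[1],
    input: match[2].replace(/^["']|["']$/g, ""), // strip surrounding quotes
  };
}
```

For the transcript above, `parseAction('Action: search("population of France 2024")')` yields the tool name and its query, while Thought and Observation lines return `null`.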

Tree of Thoughts (ToT)

Tree of Thoughts explores multiple reasoning paths in a tree structure, evaluating and pruning branches to find the best solution.

// Tree of Thoughts Implementation
async function treeOfThoughts(problem, maxDepth = 3) {
  // Generate initial thoughts
  const thoughts = await prompt(`
    Problem: ${problem}
    
    Generate 3 different initial approaches to solve this.
    Format each as: APPROACH 1: [description]
  `);
  
  const approaches = parseApproaches(thoughts);
  
  // Evaluate each approach
  const evaluated = await Promise.all(
    approaches.map(async approach => {
      const evaluation = await prompt(`
        Problem: ${problem}
        Approach: ${approach}
        
        Rate this approach 1-10 for:
        - Likely correctness
        - Efficiency
        - Completeness
        
        Return: SCORE: [number] REASON: [brief reason]
      `);
      
      return { approach, ...parseScore(evaluation) };
    })
  );
  
  // Continue with best approach
  const best = evaluated.sort((a, b) => b.score - a.score)[0];
  
  // Develop the winning approach further
  const solution = await prompt(`
    Problem: ${problem}
    Best Approach: ${best.approach}
    
    Develop this approach into a complete solution.
    Show your step-by-step reasoning.
  `);
  
  return solution;
}

// Example: Solving an algorithm problem
const solution = await treeOfThoughts(
  "Find the longest palindromic substring in a string"
);
// Explores: brute force, dynamic programming, expand around center
// Evaluates each, develops the best one
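The example above calls two helpers it never defines. Minimal sketches, assuming the "APPROACH n: ..." and "SCORE: n REASON: ..." formats the prompts request:

```javascript
// Minimal sketches of the undefined helpers, matching the output formats
// requested in the prompts above.
function parseApproaches(text) {
  // One match per "APPROACH n: description" line
  return [...text.matchAll(/APPROACH\s+\d+:\s*(.+)/gi)].map(m => m[1].trim());
}

function parseScore(text) {
  const score = Number(text.match(/SCORE:\s*(\d+)/i)?.[1] ?? 0);
  const reason = text.match(/REASON:\s*(.+)/i)?.[1]?.trim() ?? "";
  return { score, reason };
}
```

Real model output will not always follow the requested format exactly, so production code would want a fallback (e.g. re-prompting) when parsing fails.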

Least-to-Most Prompting

Break a complex problem into subproblems, solve them from simplest to hardest, and build on earlier solutions.

// Least-to-Most Pattern
async function leastToMost(complexProblem) {
  // Step 1: Decompose into subproblems
  const subproblems = await prompt(`
    Break this problem into smaller subproblems,
    ordered from simplest to most complex:
    
    Problem: ${complexProblem}
    
    List subproblems, simplest first:
  `);
  
  // Step 2: Solve each subproblem, building context
  let context = "";
  const solutions = [];
  
  for (const subproblem of parseSubproblems(subproblems)) {
    const solution = await prompt(`
      Using what we've solved so far:
      ${context}
      
      Now solve: ${subproblem}
    `);
    
    solutions.push(solution);
    context += `\nSolved: ${subproblem}\nSolution: ${solution}\n`;
  }
  
  // Step 3: Combine into final solution
  const finalSolution = await prompt(`
    Given these solutions to subproblems:
    ${context}
    
    Provide the complete solution to: ${complexProblem}
  `);
  
  return finalSolution;
}

// Example
const solution = await leastToMost(
  "Build a real-time collaborative text editor"
);
// Subproblems: 1) Basic text editing, 2) WebSocket connection,
// 3) Conflict resolution, 4) Operational transform, etc.
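As in the Tree of Thoughts example, `parseSubproblems` is assumed but never defined. A minimal sketch that keeps only numbered lines such as "1) ..." or "1. ..." from the model's decomposition:

```javascript
// Minimal sketch of the undefined parseSubproblems helper: extracts
// numbered lines ("1) ..." or "1. ...") and drops everything else.
function parseSubproblems(text) {
  return text
    .split("\n")
    .map(line => line.match(/^\s*\d+[.)]\s*(.+)/)?.[1]?.trim())
    .filter(Boolean);
}
```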

Automatic Prompt Optimization

Automatic prompt optimization (APO) uses the model itself to improve prompts iteratively: it tests them against reference cases, identifies failures, and generates improved versions until the desired accuracy is reached.

// APO: Let the model improve prompts
async function optimizePrompt(initialPrompt, testCases, iterations = 3) {
  let currentPrompt = initialPrompt;
  
  for (let i = 0; i < iterations; i++) {
    // Test current prompt
    const results = await Promise.all(
      testCases.map(async tc => {
        const output = await prompt(currentPrompt + tc.input);
        return {
          input: tc.input,
          expected: tc.expected,
          actual: output,
          correct: output.includes(tc.expected)
        };
      })
    );
    
    const score = results.filter(r => r.correct).length / results.length;
    
    if (score === 1) break; // Perfect score
    
    // Ask model to improve the prompt
    const failures = results.filter(r => !r.correct);
    
    currentPrompt = await prompt(`
      This prompt scored ${score * 100}% on test cases.
      
      Current prompt: ${currentPrompt}
      
      Failed cases:
      ${failures.map(f => 
        `Input: ${f.input}, Expected: ${f.expected}, Got: ${f.actual}`
      ).join('\n')}
      
      Improve the prompt to handle these cases.
      Return only the improved prompt.
    `);
  }
  
  return currentPrompt;
}

// Example: Optimize a classification prompt
const betterPrompt = await optimizePrompt(
  "Classify this email as spam or not spam: ",
  [
    { input: "Win a free iPhone!", expected: "spam" },
    { input: "Meeting at 3pm tomorrow", expected: "not spam" },
    // more test cases...
  ]
);
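One caveat: the grading line `correct: output.includes(tc.expected)` is a substring check, so an output of "not spam" would count as correct when "spam" is expected. A hypothetical stricter comparison:

```javascript
// Hypothetical stricter grader: a substring check like
// output.includes("spam") also accepts "not spam". Comparing normalized
// labels avoids that false positive.
function matchesLabel(output, expected) {
  return output.trim().toLowerCase() === expected.trim().toLowerCase();
}
```

Exact comparison can be too strict for free-form outputs; instructing the model to answer with the label alone keeps it workable.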

Meta-Prompting

Meta-prompting means using the AI to generate the optimal prompt for a given task. The model acts as a prompt engineering expert, considering the role, context, output format, and examples needed.

// Use AI to generate prompts
async function metaPrompt(task, context) {
  // Generate an optimal prompt for the task
  const generatedPrompt = await prompt(`
    You are a prompt engineering expert.
    
    Create an optimal prompt for this task:
    Task: ${task}
    Context: ${context}
    
    Consider:
    - What role should the AI play?
    - What context is needed?
    - What format should the output be?
    - What examples would help?
    - What constraints are important?
    
    Return the complete prompt ready to use.
  `);
  
  return generatedPrompt;
}

// Example: Generate a prompt for code review
const reviewPrompt = await metaPrompt(
  "Review React code for performance issues",
  "Senior frontend developer, production codebase, strict review"
);

// Generated prompt might include:
// - React performance expert role
// - Specific patterns to look for (memo, useMemo, useCallback)
// - Format for feedback
// - Severity levels
// - Example of good feedback

Technique selection guide

Technique          Best for                          Trade-off
Chain of Thought   Reasoning, math, logic            More tokens
Self-Consistency   High-stakes decisions             Higher cost (multiple calls)
ReAct              Tasks needing external data       Complex implementation
Tree of Thoughts   Complex problem solving           Many API calls
Least-to-Most      Multi-step problems               Sequential (slower)

✅ When to use advanced techniques

  • Complex reasoning where basic prompting fails
  • High-stakes decisions that require verification
  • Tasks that benefit from multiple perspectives
  • Problems that can be decomposed into subproblems
  • When you need to integrate external tools or data