Feedback can be created in several ways (a minimal SDK sketch follows this list):
- Sent up along with a trace from the LLM application
- Generated by a user in the app, either inline or in an annotation queue
- Generated by an automatic evaluator during offline evaluation
- Generated by an online evaluator
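As a sketch of the programmatic path, assuming the LangSmith Python SDK is installed and `LANGSMITH_API_KEY` is set in the environment, a feedback record can be attached to an existing run with `Client.create_feedback`; the run ID below is a hypothetical placeholder.

```python
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY from the environment

# Hypothetical ID of a run that was already traced to LangSmith.
run_id = "f0e1d2c3-b4a5-9687-7869-5a4b3c2d1e0f"

# Attach a "correctness" feedback record to that run.
client.create_feedback(
    run_id,
    key="correctness",  # criterion being scored
    score=1,            # numerical score
    comment="The answer matched the reference exactly.",
)
```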
| Field Name | Type | Description | 
|---|---|---|
| id | UUID | Unique identifier for the record itself | 
| created_at | datetime | Timestamp when the record was created | 
| modified_at | datetime | Timestamp when the record was last modified | 
| session_id | UUID | Unique identifier for the experiment or tracing project the run was a part of | 
| run_id | UUID | Unique identifier for a specific run within a session | 
| key | string | A key describing the feedback criterion, e.g. "correctness" | 
| score | number | Numerical score associated with the feedback key | 
| value | string | Reserved for storing a value associated with the score. Useful for categorical feedback. | 
| comment | string | Any comment or annotation associated with the record. This can be a justification for the score given. | 
| correction | object | Reserved for storing correction details, if any | 
| feedback_source | object | Object containing information about the feedback source | 
| feedback_source.type | string | The type of source where the feedback originated, e.g. "api", "app", or "evaluator" | 
| feedback_source.metadata | object | Reserved for additional metadata about the feedback source | 
| feedback_source.user_id | UUID | Unique identifier for the user providing feedback | 
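To make the schema concrete, a single feedback record might deserialize to something like the following; all identifier and timestamp values are hypothetical, and only the fields from the table above are shown.

```python
# Hypothetical feedback record illustrating the fields described above.
feedback_record = {
    "id": "1f1e2d3c-4b5a-6978-8899-aabbccddeeff",
    "created_at": "2024-05-01T12:00:00Z",
    "modified_at": "2024-05-01T12:05:00Z",
    "session_id": "0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9",  # experiment or tracing project
    "run_id": "f0e1d2c3-b4a5-9687-7869-5a4b3c2d1e0f",      # run the feedback applies to
    "key": "correctness",
    "score": 1,
    "value": None,        # e.g. a categorical label such as "pass"
    "comment": "The answer matched the reference exactly.",
    "correction": None,
    "feedback_source": {
        "type": "api",    # "api", "app", or "evaluator"
        "metadata": {},
        "user_id": None,
    },
}
```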