Last active: January 3, 2026 21:17
Thinking Backwards: The "Reversal Blessing" in LLM Multiple-Choice Reasoning
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "583c59cd",
   "metadata": {},
   "source": [
    "Let $q$ be a question with answer candidates $\\{a_1, a_2, \\dots, a_n\\}$. L2R models compute a score for each answer $a_i$ given the question $q$. This score is typically the log-probability of the answer, normalized by its length $N_i$ to prevent bias towards shorter answers:\n",
    "\n",
    "$$\n",
    "s_i^{(L2R)} = \\frac{1}{N_i} \\log p_{L2R}(a_i \\mid q).\n",
    "$$\n",
    "\n",
    "The model then selects the answer with the highest score. This approach, however, can suffer from \"surface-form competition\", where semantically similar answers (e.g., \"dog\" vs. \"puppy\") split the probability mass, penalizing the correct answer concept."
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
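The length-normalized score $s_i^{(L2R)}$ can be sketched in a few lines of Python. This is a toy illustration, not a real model: the whitespace "tokenizer" and the per-token probabilities below are made up, standing in for an actual L2R model's conditional distribution $p_{L2R}(\cdot \mid q)$.

```python
import math

def length_normalized_scores(answers, token_logprob):
    """Score each candidate a_i as (1/N_i) * log p(a_i | q): the mean
    per-token log-probability, so longer answers are not penalized."""
    scores = {}
    for ans in answers:
        tokens = ans.split()  # toy whitespace "tokenizer" (assumption)
        logp = sum(token_logprob(t) for t in tokens)
        scores[ans] = logp / len(tokens)  # N_i = number of tokens
    return scores

# Made-up per-token probabilities standing in for p_L2R(. | q) (assumption):
_toy = {"a": 0.5, "small": 0.2, "dog": 0.2, "puppy": 0.1}

def token_logprob(tok):
    return math.log(_toy.get(tok, 1e-6))

candidates = ["a small dog", "puppy"]
scores = length_normalized_scores(candidates, token_logprob)
best = max(scores, key=scores.get)  # argmax of s_i^(L2R)

# Without normalization, the shorter answer wins on raw log-probability:
raw = {a: sum(token_logprob(t) for t in a.split()) for a in candidates}
best_raw = max(raw, key=raw.get)
```

With these toy numbers the raw log-probability picks the shorter "puppy" (one penalty term instead of three), while the length-normalized score picks "a small dog" — exactly the length bias the $1/N_i$ factor is meant to remove.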