@ro99
Created February 11, 2026 16:49
Compatibility Patches for macOS Intel

This document tracks all manual patches applied to vendored/installed packages to make the OCR pipeline work on an older macOS Intel machine.

Re-apply the relevant section(s) after upgrading any of the patched packages.


1. PaddleOCR — PaddlePaddle 3.0 Compatibility

PaddleOCR 3.4 expects `fused_rms_norm_ext` and `cal_aux_loss` from PaddlePaddle 3.1+, but PaddlePaddle 3.1+ ships no macOS x86_64 wheels, so this machine is pinned to 3.0. This patch adds compatibility shims for both functions.
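Before patching, you can confirm which fused ops your PaddlePaddle build is actually missing. The helper below is a generic sketch: the module path `paddle.incubate.nn.functional` and the two op names come from the imports patched below; everything else is illustrative.

```python
import importlib

def missing_ops(module_name="paddle.incubate.nn.functional",
                ops=("fused_rms_norm_ext", "cal_aux_loss")):
    """Return the subset of `ops` not exposed by `module_name`.

    An empty set means no shim is needed. If the module itself cannot
    be imported, all ops are reported as missing.
    """
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return set(ops)
    return {op for op in ops if not hasattr(mod, op)}
```

On PaddlePaddle 3.0 this should report both ops missing; on 3.1+ it should return an empty set, in which case the patch below is unnecessary.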

File to patch

.venv/lib/python3.12/site-packages/paddlex/inference/models/doc_vlm/modeling/paddleocr_vl/_fusion_ops/__init__.py

Instructions

Open the file above and replace lines 18–28 (the imports between the module docstring `"""` and `__all__`):

Original code (remove this)

import paddle
from paddle.incubate.nn.functional import fused_rms_norm_ext
from paddle.incubate.nn.functional import fused_rotary_position_embedding as fused_rope
from paddle.incubate.nn.functional import swiglu as fused_swiglu

from .common_fusion_ops import Linear, matmul

if paddle.device.is_compiled_with_custom_device("npu"):
    from .npu_fusion_ops import npu_cal_aux_loss_func as cal_aux_loss
else:
    from paddle.incubate.nn.functional import cal_aux_loss

Patched code (paste this instead)

import paddle
from paddle.incubate.nn.functional import fused_rotary_position_embedding as fused_rope
from paddle.incubate.nn.functional import swiglu as fused_swiglu

# Compatibility shim: fused_rms_norm_ext was added in PaddlePaddle 3.1+
try:
    from paddle.incubate.nn.functional import fused_rms_norm_ext
except ImportError:
    from paddle.incubate.nn.functional import fused_rms_norm as _fused_rms_norm

    def fused_rms_norm_ext(x, weight, epsilon):
        """Compatibility wrapper for fused_rms_norm_ext using fused_rms_norm."""
        out = _fused_rms_norm(x, weight, None, epsilon, begin_norm_axis=len(x.shape) - 1)
        if isinstance(out, tuple):
            return out
        return (out,)

from .common_fusion_ops import Linear, matmul

# Compatibility shim: cal_aux_loss was added in PaddlePaddle 3.1+
if paddle.device.is_compiled_with_custom_device("npu"):
    from .npu_fusion_ops import npu_cal_aux_loss_func as cal_aux_loss
else:
    try:
        from paddle.incubate.nn.functional import cal_aux_loss
    except ImportError:
        def cal_aux_loss(*args, **kwargs):
            """No-op stub for cal_aux_loss (MoE training only)."""
            return paddle.to_tensor(0.0)
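To sanity-check the `fused_rms_norm_ext` shim, here is a plain NumPy reference of RMSNorm over the last axis. This is my own sketch of the standard RMSNorm formula, not PaddlePaddle code; the shim's first tuple element should agree with it for well-behaved inputs.

```python
import numpy as np

def rms_norm_reference(x, weight, epsilon=1e-6):
    """RMSNorm over the last axis: x / sqrt(mean(x**2) + eps) * weight."""
    rms = np.sqrt(np.mean(np.square(x), axis=-1, keepdims=True) + epsilon)
    return (x / rms) * weight
```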

When to re-apply

Re-apply this patch after any of the following, each of which can overwrite the patched file:

pip install --upgrade paddleocr
pip install --upgrade paddlex
pip install "paddlex[ocr]"

2. Transformers — PyTorch 2.2 Compatibility

Transformers (latest) requires PyTorch ≥ 2.4, but this machine is stuck on 2.2.2. Two files need patching (three edits; 2b and 2c touch the same file). The PP-DocLayout model only uses standard PyTorch APIs available in 2.2, so these patches are safe for this use case.

2a. Lower the PyTorch version gate

File: .venv/lib/python3.12/site-packages/transformers/utils/import_utils.py

In `is_torch_available()` (around line 110), change `2.4.0` to `2.2.0`:

-        if is_available and parsed_version < version.parse("2.4.0"):
-            logger.warning_once(f"Disabling PyTorch because PyTorch >= 2.4 is required but found {torch_version}")
-        return is_available and version.parse(torch_version) >= version.parse("2.4.0")
+        if is_available and parsed_version < version.parse("2.2.0"):
+            logger.warning_once(f"Disabling PyTorch because PyTorch >= 2.2 is required but found {torch_version}")
+        return is_available and version.parse(torch_version) >= version.parse("2.2.0")
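The lowered gate is just a version comparison. A stdlib-only sketch of the same check (the function names and the crude version parsing are mine, for illustration; transformers itself uses `packaging.version`):

```python
def _parse(v: str):
    # Keep only leading numeric components: "2.2.2+cpu" -> (2, 2, 2).
    # Crude on pre-releases ("2.2.0rc1" -> (2, 2)), fine for release wheels.
    return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

def torch_meets_minimum(torch_version: str, minimum: str = "2.2.0") -> bool:
    """True when torch_version satisfies the (lowered) minimum."""
    return _parse(torch_version) >= _parse(minimum)
```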

2b. Handle missing dtype attributes

File: .venv/lib/python3.12/site-packages/transformers/modeling_utils.py

Replace the str_to_torch_dtype dict (around line 274) to use getattr for dtypes added after PyTorch 2.2 (uint16, uint32, uint64, float8_*):

-    "U16": torch.uint16,
+    "U16": getattr(torch, "uint16", None),
     ...
-    "U32": torch.uint32,
+    "U32": getattr(torch, "uint32", None),
     ...
-    "U64": torch.uint64,
-    "F8_E4M3": torch.float8_e4m3fn,
-    "F8_E5M2": torch.float8_e5m2,
-}
+    "U64": getattr(torch, "uint64", None),
+    "F8_E4M3": getattr(torch, "float8_e4m3fn", None),
+    "F8_E5M2": getattr(torch, "float8_e5m2", None),
+}
+str_to_torch_dtype = {k: v for k, v in str_to_torch_dtype.items() if v is not None}
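The pattern here: build the table with `getattr(..., None)` and then drop `None` entries, so lookups for dtypes this build lacks fail cleanly instead of crashing at import time. A minimal standalone demonstration (`FakeTorch` is a stand-in for a PyTorch 2.2-era module that lacks `uint16`; the real patch targets the `torch` module):

```python
class FakeTorch:
    # Stand-in: this "build" has uint8 and float32 but no uint16.
    uint8 = "torch.uint8"
    float32 = "torch.float32"

str_to_dtype = {
    "U8": getattr(FakeTorch, "uint8", None),
    "U16": getattr(FakeTorch, "uint16", None),  # missing -> None
    "F32": getattr(FakeTorch, "float32", None),
}
# Same filtering line as the patch: keep only dtypes this build provides.
str_to_dtype = {k: v for k, v in str_to_dtype.items() if v is not None}
```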

2c. Handle missing torch.get_default_device()

Same file (modeling_utils.py), in get_torch_context_manager_or_global_device() (around line 251):

-    default_device = torch.get_default_device()
+    default_device = torch.get_default_device() if hasattr(torch, "get_default_device") else torch.device("cpu")
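The same `hasattr` guard can be exercised without PyTorch installed; `SimpleNamespace` below mocks both a newer `torch` (which has `get_default_device`) and the 2.2 module (which does not). Names and the `"mps"` return value are illustrative.

```python
from types import SimpleNamespace

def resolve_default_device(torch_like):
    """Mirror the patched line: prefer get_default_device(), else CPU."""
    if hasattr(torch_like, "get_default_device"):
        return torch_like.get_default_device()
    return "cpu"

new_torch = SimpleNamespace(get_default_device=lambda: "mps")  # newer PyTorch
old_torch = SimpleNamespace()                                  # PyTorch 2.2
```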

When to re-apply

Re-apply all three patches (2a–2c) after:

pip install --upgrade transformers

Environment reference

| Component    | Version               |
|--------------|-----------------------|
| macOS        | 12.7.6                |
| Arch         | x86_64 (Intel)        |
| Python       | 3.12                  |
| PyTorch      | 2.2.2                 |
| Transformers | latest                |
| PaddlePaddle | 3.0.0                 |
| PaddleOCR    | 3.4.0                 |
| NumPy        | 1.26.4 (< 2 required) |