PyTorch API¶
The PyTorch integration provides differentiable optimization layers with full autograd support.
Solver¶
- class moreau.torch.Solver(n, m, P_row_offsets, P_col_indices, A_row_offsets, A_col_indices, cones, settings=None)¶
Unified PyTorch solver with automatic device selection and autograd support.
enable_grad is forced to True for PyTorch solvers (gradients are always enabled).
- Parameters:
n – Number of primal variables
m – Number of constraints
P_row_offsets – CSR row pointers for P matrix (torch.Tensor). P must be full symmetric (both upper and lower triangles).
P_col_indices – CSR column indices for P matrix (torch.Tensor)
A_row_offsets – CSR row pointers for A matrix (torch.Tensor)
A_col_indices – CSR column indices for A matrix (torch.Tensor)
cones – Cone specification (moreau.Cones object)
settings – Optional solver settings (moreau.Settings object)
Example:
import torch
from moreau.torch import Solver
import moreau
cones = moreau.Cones(num_nonneg_cones=2)
settings = moreau.Settings(device='cuda', batch_size=64)
solver = Solver(
    n=2, m=2,
    P_row_offsets=torch.tensor([0, 1, 2]),
    P_col_indices=torch.tensor([0, 1]),
    A_row_offsets=torch.tensor([0, 1, 2]),
    A_col_indices=torch.tensor([0, 1]),
    cones=cones,
    settings=settings,
)
solver.setup(P_values, A_values)
solution = solver.solve(q, b)
print(solver.info.status[0], solver.info.obj_val[0])
- setup(P_values, A_values)¶
Set P and A matrix values.
Must be called before solve(). Can be called multiple times to update values for repeated solves with the same structure.
- Parameters:
P_values – P matrix values, shape (batch, nnzP) or (nnzP,), dtype=float64
A_values – A matrix values, shape (batch, nnzA) or (nnzA,), dtype=float64
- solve(q, b, warm_start=None)¶
Solve the optimization problem.
If any input has requires_grad=True, the outputs support automatic differentiation via loss.backward().
- Parameters:
q – Linear cost vector, shape (batch, n) or (n,), dtype=float64
b – Constraint RHS, shape (batch, m) or (m,), dtype=float64
warm_start – Optional WarmStart or BatchedWarmStart from a previous solve (e.g. solution.to_warm_start()). If the warm-started solve fails, it is automatically retried without warm start.
- Returns:
Solution object (TorchSolution or TorchBatchedSolution)
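For sequences of related problems, the documented warm-start path can reuse the previous solution. A minimal sketch, assuming a solver already constructed and set up as in the constructor example above (the perturbation of q is illustrative only):

```python
# Sketch: re-solving with updated data, warm-started from the previous solution.
# Assumes `solver`, `q`, and `b` are already set up as in the example above.
solution = solver.solve(q, b)

# Perturb the problem data and re-solve, seeding from the previous solution.
q_new = q + 0.01
warm = solution.to_warm_start()
solution = solver.solve(q_new, b, warm_start=warm)
# If the warm-started solve fails, the solver retries without the warm start.
```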
- backward(dx, dz=None, ds=None)¶
Compute gradients via implicit differentiation.
- Parameters:
dx – Gradient w.r.t. primal solution x (torch.Tensor)
dz – Optional gradient w.r.t. dual variables z (torch.Tensor)
ds – Optional gradient w.r.t. slack variables s (torch.Tensor)
- Returns:
Tuple of gradient tensors (dP, dA, dq, db)
- setup_grad(batch_size=None)¶
Pre-allocate memory for gradient computation (backward pass).
Optional but recommended when calling backward() repeatedly.
- Parameters:
batch_size – Optional batch size for pre-allocation.
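The manual-gradient path combines setup_grad() with backward(). A minimal sketch, assuming a configured solver and inputs as in the examples above; the choice of loss (sum of x) is illustrative:

```python
# Sketch: manual gradient extraction via backward(), assuming `solver`,
# `q`, and `b` are set up as in the constructor example above.
import torch

solver.setup_grad()               # pre-allocate buffers for repeated backward passes
solution = solver.solve(q, b)

# Seed gradient dL/dx for the loss L = sum(x), then pull back through the solver.
dx = torch.ones_like(solution.x)
dP, dA, dq, db = solver.backward(dx)
```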
- reset()¶
Reset solver state.
- info¶
Metadata from the last solve() call. Returns None if solve() has not been called yet. Type is TorchSolveInfo (single problem) or TorchBatchedSolveInfo (batched).
- tune_result: TuneResult or None¶
Result from auto-tuning on the first solve() call. Returns None if auto-tune has not run (e.g. device and method were set explicitly, or solve() has not been called).
Functional API¶
- moreau.torch.solver(n, m, P_row_offsets, P_col_indices, A_row_offsets, A_col_indices, cones, settings=None)¶
Create a PyTorch-compatible solve function.
Returns a function that solves conic optimization problems with autograd support.
- Parameters:
n – Number of primal variables
m – Number of constraints
P_row_offsets – CSR row pointers for P matrix (torch.Tensor). P must be full symmetric (both upper and lower triangles).
P_col_indices – CSR column indices for P matrix (torch.Tensor)
A_row_offsets – CSR row pointers for A matrix (torch.Tensor)
A_col_indices – CSR column indices for A matrix (torch.Tensor)
cones – Cone specification (moreau.Cones object)
settings – Optional solver settings (moreau.Settings object)
- Returns:
Function with signature (P_values, A_values, q, b) -> (Solution, Info)
Example:
from moreau.torch import solver
import moreau
cones = moreau.Cones(num_nonneg_cones=2)
solve = solver(
    n=2, m=2,
    P_row_offsets=..., P_col_indices=...,
    A_row_offsets=..., A_col_indices=...,
    cones=cones,
)
solution, info = solve(P_values, A_values, q, b)
print(solution.x, info.status)
Gradient Computation¶
Gradients flow through the solver via implicit differentiation:
import torch
from moreau.torch import Solver
import moreau
cones = moreau.Cones(num_nonneg_cones=2)
solver = Solver(
n=2, m=2,
P_row_offsets=torch.tensor([0, 1, 2]),
P_col_indices=torch.tensor([0, 1]),
A_row_offsets=torch.tensor([0, 1, 2]),
A_col_indices=torch.tensor([0, 1]),
cones=cones,
)
P_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
A_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
solver.setup(P_values, A_values)
# Enable gradients on inputs
q = torch.tensor([1.0, 1.0], dtype=torch.float64, requires_grad=True)
b = torch.tensor([0.5, 0.5], dtype=torch.float64)
# Solve
solution = solver.solve(q, b)
# Backpropagate
loss = solution.x.sum()
loss.backward()
# Access gradients
print(q.grad) # dL/dq
GPU Usage¶
For GPU acceleration:
import torch
from moreau.torch import Solver
import moreau
settings = moreau.Settings(device='cuda', batch_size=256)
solver = Solver(n=2, m=2, ..., cones=cones, settings=settings)
# Keep tensors on GPU
P_values = torch.tensor([1., 1.], dtype=torch.float64, device='cuda')
A_values = torch.tensor([1., 1.], dtype=torch.float64, device='cuda')
q = torch.randn(256, 2, dtype=torch.float64, device='cuda', requires_grad=True)
b = torch.randn(256, 2, dtype=torch.float64, device='cuda')
solver.setup(P_values, A_values)
solution = solver.solve(q, b)
Data Types¶
TorchSolution¶
- class moreau.torch.TorchSolution¶
Single-problem solution with PyTorch tensors.
- x: torch.Tensor¶
Primal solution, shape (n,)
- z: torch.Tensor¶
Dual variables, shape (m,)
- s: torch.Tensor¶
Slack variables, shape (m,)
TorchBatchedSolution¶
- class moreau.torch.TorchBatchedSolution¶
Batched solution with PyTorch tensors.
Supports indexing (solution[i] returns a TorchSolution), len(), and iteration.
- x: torch.Tensor¶
Primal solutions, shape (batch, n)
- z: torch.Tensor¶
Dual variables, shape (batch, m)
- s: torch.Tensor¶
Slack variables, shape (batch, m)
- to_warm_start()¶
Create a BatchedWarmStart from this solution (detaches and moves to CPU).
- Return type: BatchedWarmStart
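The container protocol described above can be sketched as follows, assuming `solution` is a TorchBatchedSolution returned by a batched solve:

```python
# Sketch: inspecting a batched solution, assuming `solution` came from
# a batched `solver.solve(q, b)` call.
print(len(solution))              # number of problems in the batch
first = solution[0]               # a TorchSolution for problem 0
for sol in solution:
    print(sol.x)                  # per-problem primal solution, shape (n,)

warm = solution.to_warm_start()   # detached, CPU BatchedWarmStart for the next solve
```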
TorchSolveInfo¶
- class moreau.torch.TorchSolveInfo¶
Metadata from a single-problem PyTorch solve.
- status: SolverStatus¶
Solve outcome
- obj_val: torch.Tensor¶
Objective value tensor
- iterations: torch.Tensor¶
Iteration count tensor
TorchBatchedSolveInfo¶
- class moreau.torch.TorchBatchedSolveInfo¶
Metadata from a batched PyTorch solve.
- status: list[SolverStatus]¶
Per-problem solve outcome
- obj_val: torch.Tensor¶
Per-problem objective values, shape (batch_size,)
- iterations: torch.Tensor¶
Per-problem iteration counts, shape (batch_size,)
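Because status is a per-problem list, failed instances can be filtered individually after a batched solve. A sketch, assuming a completed batched solve; the status value `Solved` is a hypothetical member name, not confirmed by this page:

```python
# Sketch: checking per-problem outcomes, assuming `solver.info` is a
# TorchBatchedSolveInfo from a completed batched solve.
info = solver.info
for i, status in enumerate(info.status):
    if status != moreau.SolverStatus.Solved:   # hypothetical status member name
        print(f"problem {i} did not solve: {status}")
print(info.obj_val.mean())   # average objective value over the batch
```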