PyTorch API¶
The PyTorch integration provides differentiable optimization layers with full autograd support.
Solver¶
- class moreau.torch.Solver(n, m, P_row_offsets, P_col_indices, A_row_offsets, A_col_indices, cones, settings=None)¶
Unified PyTorch solver with automatic device selection and autograd support.
- Parameters:
n – Number of primal variables
m – Number of constraints
P_row_offsets – CSR row pointers for P matrix (torch.Tensor)
P_col_indices – CSR column indices for P matrix (torch.Tensor)
A_row_offsets – CSR row pointers for A matrix (torch.Tensor)
A_col_indices – CSR column indices for A matrix (torch.Tensor)
cones – Cone specification (moreau.Cones object)
settings – Optional solver settings (moreau.Settings object)
Example:
import torch
from moreau.torch import Solver
import moreau

cones = moreau.Cones(num_nonneg_cones=2)
settings = moreau.Settings(device='cuda', batch_size=64)

solver = Solver(
    n=2, m=2,
    P_row_offsets=torch.tensor([0, 1, 2]),
    P_col_indices=torch.tensor([0, 1]),
    A_row_offsets=torch.tensor([0, 1, 2]),
    A_col_indices=torch.tensor([0, 1]),
    cones=cones,
    settings=settings,
)

# Matrix values and problem data (2x2 diagonal P and A).
P_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
A_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
q = torch.tensor([1.0, 1.0], dtype=torch.float64)
b = torch.tensor([0.5, 0.5], dtype=torch.float64)

solver.setup(P_values, A_values)
solution = solver.solve(q, b)
print(solver.info.status, solver.info.obj_val)
- setup(P_values, A_values)¶
Set P and A matrix values.
Must be called before solve(). Can be called multiple times to update values for repeated solves with the same structure (see the sketch after the parameter list below).
- Parameters:
P_values – P matrix values, shape (batch, nnzP) or (nnzP,), dtype=float64
A_values – A matrix values, shape (batch, nnzA) or (nnzA,), dtype=float64
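A minimal sketch of the update pattern, reusing the 2x2 diagonal solver constructed in the class example above (the numeric values are illustrative):

import torch

q = torch.tensor([1.0, 1.0], dtype=torch.float64)
b = torch.tensor([0.5, 0.5], dtype=torch.float64)

# First solve with initial matrix values.
solver.setup(torch.tensor([1.0, 1.0], dtype=torch.float64),
             torch.tensor([1.0, 1.0], dtype=torch.float64))
solution = solver.solve(q, b)

# Update only the numeric values; the sparsity structure stays fixed.
solver.setup(torch.tensor([2.0, 0.5], dtype=torch.float64),
             torch.tensor([1.0, 3.0], dtype=torch.float64))
solution = solver.solve(q, b)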
- solve(q, b)¶
Solve the optimization problem.
If any input has requires_grad=True, the outputs support automatic differentiation via loss.backward(). Single-problem and batched call shapes are illustrated in the example below.
- Parameters:
q – Linear cost vector, shape (batch, n) or (n,), dtype=float64
b – Constraint RHS, shape (batch, m) or (m,), dtype=float64
- Returns:
Solution object (TorchSolution or TorchBatchedSolution)
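A sketch of the two call shapes, reusing the solver from the class example: per the shapes documented above, 1-D inputs solve a single problem and a leading batch dimension solves many.

import torch

# Single problem: 1-D inputs return a TorchSolution.
q = torch.tensor([1.0, 1.0], dtype=torch.float64)
b = torch.tensor([0.5, 0.5], dtype=torch.float64)
solution = solver.solve(q, b)
print(solution.x.shape)   # torch.Size([2])

# Batch of problems: 2-D inputs return a TorchBatchedSolution.
q_batch = torch.randn(64, 2, dtype=torch.float64)
b_batch = torch.randn(64, 2, dtype=torch.float64)
batch_solution = solver.solve(q_batch, b_batch)
print(batch_solution.x.shape)  # torch.Size([64, 2])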
- info¶
Metadata from the last solve() call.
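For example, using the status and obj_val fields shown in the class example above (other fields are not assumed here):

solution = solver.solve(q, b)
print(solver.info.status)   # termination status of the last solve
print(solver.info.obj_val)  # objective value reported by the last solve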
Functional API¶
- moreau.torch.solver(n, m, P_row_offsets, P_col_indices, A_row_offsets, A_col_indices, cones, settings=None)¶
Create a PyTorch-compatible solve function.
Returns a function that solves conic optimization problems with autograd support.
- Parameters:
n – Number of primal variables
m – Number of constraints
P_row_offsets – CSR row pointers for P matrix (torch.Tensor)
P_col_indices – CSR column indices for P matrix (torch.Tensor)
A_row_offsets – CSR row pointers for A matrix (torch.Tensor)
A_col_indices – CSR column indices for A matrix (torch.Tensor)
cones – Cone specification (moreau.Cones object)
settings – Optional solver settings (moreau.Settings object)
- Returns:
Function with signature
(P_values, A_values, q, b) -> (Solution, Info)
Example:
from moreau.torch import solver
import moreau

cones = moreau.Cones(num_nonneg_cones=2)

solve = solver(
    n=2, m=2,
    P_row_offsets=..., P_col_indices=...,
    A_row_offsets=..., A_col_indices=...,
    cones=cones,
)

solution, info = solve(P_values, A_values, q, b)
print(solution.x, info.status)
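Since the returned function takes P_values and A_values as inputs, the functional form may also be the route for differentiating through the matrix data. A hedged sketch, reusing solve from the example above and assuming gradients propagate through matrix values with requires_grad=True (the class documentation only states this for q and b):

import torch

P_values = torch.tensor([1.0, 1.0], dtype=torch.float64, requires_grad=True)
A_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
q = torch.tensor([1.0, 1.0], dtype=torch.float64)
b = torch.tensor([0.5, 0.5], dtype=torch.float64)

solution, info = solve(P_values, A_values, q, b)
solution.x.sum().backward()
print(P_values.grad)  # dL/dP_values, assuming matrix-value gradients are supported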
Gradient Computation¶
Gradients flow through the solver via implicit differentiation:
import torch
from moreau.torch import Solver
import moreau
cones = moreau.Cones(num_nonneg_cones=2)
solver = Solver(
    n=2, m=2,
    P_row_offsets=torch.tensor([0, 1, 2]),
    P_col_indices=torch.tensor([0, 1]),
    A_row_offsets=torch.tensor([0, 1, 2]),
    A_col_indices=torch.tensor([0, 1]),
    cones=cones,
)
P_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
A_values = torch.tensor([1.0, 1.0], dtype=torch.float64)
solver.setup(P_values, A_values)
# Enable gradients on inputs
q = torch.tensor([1.0, 1.0], dtype=torch.float64, requires_grad=True)
b = torch.tensor([0.5, 0.5], dtype=torch.float64)
# Solve
solution = solver.solve(q, b)
# Backpropagate
loss = solution.x.sum()
loss.backward()
# Access gradients
print(q.grad) # dL/dq
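The same pattern extends to batched inputs, where gradients take the batch shape; a sketch reusing the solver above:

q_batch = torch.ones(8, 2, dtype=torch.float64, requires_grad=True)
b_batch = torch.full((8, 2), 0.5, dtype=torch.float64)

solution = solver.solve(q_batch, b_batch)
solution.x.sum().backward()
print(q_batch.grad.shape)  # torch.Size([8, 2])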
GPU Usage¶
For GPU acceleration:
import torch
from moreau.torch import Solver
import moreau
cones = moreau.Cones(num_nonneg_cones=2)
settings = moreau.Settings(device='cuda', batch_size=256)
solver = Solver(n=2, m=2, ..., cones=cones, settings=settings)
# Keep tensors on GPU
P_values = torch.tensor([1., 1.], dtype=torch.float64, device='cuda')
A_values = torch.tensor([1., 1.], dtype=torch.float64, device='cuda')
q = torch.randn(256, 2, dtype=torch.float64, device='cuda', requires_grad=True)
b = torch.randn(256, 2, dtype=torch.float64, device='cuda')
solver.setup(P_values, A_values)
solution = solver.solve(q, b)
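Backpropagation proceeds the same way on GPU tensors; a short sketch continuing the snippet above:

# Loss and backward run on the GPU; gradients land on q.grad there as well.
loss = solution.x.sum()
loss.backward()
print(q.grad.device)  # cuda:0

# Move results off the device for downstream CPU/NumPy code if needed.
x_cpu = solution.x.detach().cpu()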
Data Types¶
TorchSolution¶
- class moreau.torch.TorchSolution¶
Single-problem solution with PyTorch tensors.
- x: torch.Tensor¶
Primal solution, shape (n,)
- z: torch.Tensor¶
Dual variables, shape (m,)
- s: torch.Tensor¶
Slack variables, shape (m,)
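For instance, continuing the single-problem examples above:

solution = solver.solve(q, b)  # 1-D inputs produce a TorchSolution
print(solution.x)  # primal solution, shape (n,)
print(solution.z)  # dual variables, shape (m,)
print(solution.s)  # slack variables, shape (m,)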
TorchBatchedSolution¶
- class moreau.torch.TorchBatchedSolution¶
Batched solution with PyTorch tensors.
- x: torch.Tensor¶
Primal solutions, shape (batch, n)
- z: torch.Tensor¶
Dual variables, shape (batch, m)
- s: torch.Tensor¶
Slack variables, shape (batch, m)
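And continuing the batched examples:

batch_solution = solver.solve(q_batch, b_batch)  # 2-D inputs produce a TorchBatchedSolution
print(batch_solution.x.shape)  # (batch, n)
first_x = batch_solution.x[0]  # solution of the first problem in the batch, shape (n,)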