Core API¶
The core API provides NumPy-based solvers for conic optimization.
Solver¶
- class moreau.Solver(P, q, A, b, cones, settings=None)¶
Single-problem conic optimization solver.
All problem data is provided in the constructor; then call solve().
Problem Formulation:
minimize    (1/2) x'Px + q'x
subject to  Ax + s = b
            s in K
- Parameters:
P – Quadratic objective matrix (scipy sparse or numpy array). Must be stored fully symmetric (both upper and lower triangles).
q – Linear objective vector, shape (n,)
A – Constraint matrix (scipy sparse or numpy array), shape (m, n)
b – Constraint RHS vector, shape (m,)
cones – Cone specification (moreau.Cones object)
settings – Optional solver settings (moreau.Settings object)
Example:
import moreau
from scipy import sparse
import numpy as np

P = sparse.diags([1.0, 1.0], format='csr')
q = np.array([2.0, 1.0])
A = sparse.csr_array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.7, 0.7])
cones = moreau.Cones(num_zero_cones=1, num_nonneg_cones=2)

solver = moreau.Solver(P, q, A, b, cones)
solution = solver.solve()
print(solution.x, solver.info.status)
- solve(warm_start=None)¶
Solve the optimization problem.
- Parameters:
warm_start – Optional WarmStart from a previous solve (e.g. solution.to_warm_start()). If the warm-started solve fails, it is automatically retried without warm start.
- Returns:
Solution object with primal/dual solution vectors (x, z, s)
- Return type:
Solution
- backward(dx, dz=None, ds=None)¶
Compute gradients via implicit differentiation using cached state from the last solve() call.
Requires enable_grad=True in settings.
- Parameters:
dx – Gradient w.r.t. primal solution x, shape (n,)
dz – Optional gradient w.r.t. dual variables z, shape (m,)
ds – Optional gradient w.r.t. slack variables s, shape (m,)
- Returns:
Tuple of gradients (dP, dA, dq, db); sparse P/A gradients have the same CSR structure as the input matrices.
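Since the sparse gradients share the input's CSR structure, they can be wrapped back into scipy matrices for inspection. A minimal sketch, assuming the gradient of P arrives as a flat array of per-nonzero values (dP_vals is a placeholder, not actual solver output):

```python
import numpy as np
from scipy import sparse

# Original quadratic objective matrix (full symmetric).
P = sparse.csr_array([[2.0, 0.0], [0.0, 1.0]])

# Placeholder gradient values, one per nonzero of P, in CSR order.
dP_vals = np.array([0.5, -0.25])

# Reusing P's indptr/indices rebuilds a matrix with the identical pattern.
dP = sparse.csr_array((dP_vals, P.indices, P.indptr), shape=P.shape)

# The gradient has exactly the same sparsity pattern as P.
assert (dP.indptr == P.indptr).all() and (dP.indices == P.indices).all()
```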
CompiledSolver¶
- class moreau.CompiledSolver(n, m, P_row_offsets, P_col_indices, A_row_offsets, A_col_indices, cones, settings=None)¶
Compiled solver for batched problems with shared structure.
Uses a three-step API: construct with the problem structure, setup() the matrix values, then solve() with per-problem parameters.
- Parameters:
n – Number of primal variables
m – Number of constraints
P_row_offsets – CSR row pointers for P matrix. P must be full symmetric (both upper and lower triangles).
P_col_indices – CSR column indices for P matrix
A_row_offsets – CSR row pointers for A matrix
A_col_indices – CSR column indices for A matrix
cones – Cone specification (moreau.Cones object)
settings – Solver settings (moreau.Settings object)
Example:
import moreau
import numpy as np

cones = moreau.Cones(num_zero_cones=1, num_nonneg_cones=2)
settings = moreau.Settings(batch_size=4)
solver = moreau.CompiledSolver(
    n=2, m=3,
    P_row_offsets=[0, 1, 2], P_col_indices=[0, 1],
    A_row_offsets=[0, 2, 3, 4], A_col_indices=[0, 1, 0, 1],
    cones=cones, settings=settings
)
solver.setup(P_values=[1., 1.], A_values=[1., 1., 1., 1.])
solution = solver.solve(qs=[[2., 1.]]*4, bs=[[1., 0.7, 0.7]]*4)
print(solution.x.shape)  # (4, 2)
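The CSR structure arguments can be derived directly from scipy matrices. A sketch of preparing them (variable names are illustrative, not part of the API), including mirroring an upper-triangular P into the full symmetric form the solver requires:

```python
import numpy as np
from scipy import sparse

# P is often stored as an upper triangle only; the solver expects the full
# symmetric matrix, so mirror the strict upper part into the lower triangle.
P_upper = sparse.csr_array([[2.0, 1.0], [0.0, 3.0]])
P_full = sparse.csr_array(sparse.triu(P_upper) + sparse.triu(P_upper, k=1).T)

A = sparse.csr_array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])

# The structure arrays passed to the constructor are plain CSR metadata;
# the matching value arrays go to setup() afterwards.
P_row_offsets, P_col_indices = P_full.indptr, P_full.indices
A_row_offsets, A_col_indices = A.indptr, A.indices
```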
- setup(P_values, A_values)¶
Set P and A matrix values for the batch.
- Parameters:
P_values – P matrix values. Shape (batch, nnz_P) or (nnz_P,) if shared.
A_values – A matrix values. Shape (batch, nnz_A) or (nnz_A,) if shared.
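The two accepted shapes let a batch either share one set of matrix values or vary them per problem. A NumPy sketch of preparing both forms (the nnz counts here are chosen for illustration):

```python
import numpy as np

batch, nnz_P, nnz_A = 4, 2, 4

# Shared values: a single copy applied to every problem in the batch.
P_shared = np.array([1.0, 1.0])            # shape (nnz_P,)

# Per-problem values: one row per batch element, e.g. a perturbed diagonal.
P_batched = np.tile(P_shared, (batch, 1))  # shape (batch, nnz_P)
P_batched[:, 0] += 0.1 * np.arange(batch)  # vary the first nonzero per problem

A_shared = np.ones(nnz_A)                  # shape (nnz_A,)

print(P_shared.shape, P_batched.shape)     # (2,) (4, 2)
```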
- solve(qs, bs, warm_start=None)¶
Solve a batch of problems.
When auto_tune=True and device='auto' or direct_solve_method='auto', the first call benchmarks candidate configurations and locks in the fastest for all subsequent solves (see Device Selection). By default (auto_tune=False), a heuristic picks the configuration without benchmarking.
- Parameters:
qs – Linear cost vectors, shape (batch, n)
bs – Constraint RHS vectors, shape (batch, m)
warm_start – Optional WarmStart or BatchedWarmStart from a previous solve (e.g. solution.to_warm_start()). If the warm-started solve fails, it is automatically retried without warm start.
- Returns:
Batched solution with arrays of shape (batch, n) or (batch, m)
- Return type:
BatchedSolution
- backward(dx, dz=None, ds=None)¶
Compute gradients via implicit differentiation using cached state from the last solve() call.
Requires enable_grad=True in settings.
- Parameters:
dx – Gradient w.r.t. primal solutions x, shape (batch, n)
dz – Optional gradient w.r.t. dual variables z, shape (batch, m)
ds – Optional gradient w.r.t. slack variables s, shape (batch, m)
- Returns:
Tuple of gradients (dP, dA, dq, db); sparse P/A gradients have the same CSR structure as the input matrices.
- info: BatchedSolveInfo¶
Metadata from the last solve() call. Returns None if solve() has not been called yet.
- tune_result: TuneResult or None¶
Result from auto-tuning on the first solve() call. Returns None if auto-tune has not run (e.g. device and method were set explicitly, or solve() has not been called). Also available on moreau.torch.Solver and moreau.jax.Solver.
Cones¶
- class moreau.Cones(num_zero_cones=0, num_nonneg_cones=0, so_cone_dims=None, num_exp_cones=0, power_alphas=None)¶
Specification for the cone structure K in the constraint s in K.
Constraints must be ordered in A and b to match: zero cones, then nonnegative, then second-order, then exponential, then power.
- Parameters:
num_zero_cones – Number of equality constraints (zero cone, any dimension)
num_nonneg_cones – Number of inequality constraints (nonnegative cone)
so_cone_dims – List of second-order cone dimensions (each >= 2). For example, so_cone_dims=[3, 5, 10] creates three SOC cones of dimensions 3, 5, and 10. Defaults to empty list.
num_exp_cones – Number of exponential cones (dimension 3)
power_alphas – List of alpha values for power cones (dimension 3 each, alpha in (0, 1)). Defaults to empty list.
- total_constraints()¶
Total number of scalar constraints.
- Returns:
Sum of all cone dimensions
- degree()¶
Degree of the cone (barrier function degree).
- Returns:
Barrier function degree
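The ordering rule above determines how the rows of A and b must be laid out. A NumPy sketch for a hypothetical layout of one zero cone, two nonnegative cones, and a single SOC of dimension 3 (i.e. Cones(num_zero_cones=1, num_nonneg_cones=2, so_cone_dims=[3])):

```python
import numpy as np

num_zero, num_nonneg, so_dims = 1, 2, [3]

b_zero = np.array([1.0])            # equality rows first (zero cone)
b_nonneg = np.array([0.7, 0.7])     # then inequality rows (nonnegative cone)
b_soc = np.array([1.0, 0.0, 0.0])   # then the second-order cone block
b = np.concatenate([b_zero, b_nonneg, b_soc])

# total_constraints() should equal the sum of all cone dimensions.
m = num_zero + num_nonneg + sum(so_dims)
assert b.shape == (m,)
print(m)  # 6
```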
SolverType¶
Enum of solver algorithms. Currently the only member is SolverType.IPM (see Settings).
Settings¶
- class moreau.Settings(solver='ipm', device='auto', device_id=-1, batch_size=1, enable_grad=False, auto_tune=False, max_iter=200, time_limit=inf, verbose=False, ipm_settings=None)¶
Solver configuration.
- Parameters:
solver – Solver algorithm ('ipm' or SolverType.IPM). Currently the only option.
device – Device selection ('auto', 'cpu', 'cuda')
device_id – CUDA device ID. -1 (default) uses the current device. Ignored when device='cpu'.
batch_size – Batch size for CompiledSolver (default 1)
enable_grad – Enable gradient computation for backward() (default False)
auto_tune – If True, benchmark solver configurations on the first solve() call when device='auto' or direct_solve_method='auto'. If False (default), use heuristic selection without benchmarking.
max_iter – Maximum IPM iterations (default 200, must be >= 1)
time_limit – Time limit in seconds (default infinity, must be > 0)
verbose – Print solver progress (default False)
ipm_settings – Fine-grained IPM settings (IPMSettings object). Auto-created with defaults if None.
IPMSettings¶
- class moreau.IPMSettings(tol_gap_abs=1e-8, tol_gap_rel=1e-8, tol_feas=1e-8, tol_infeas_abs=1e-8, tol_infeas_rel=1e-8, tol_ktratio=1e-6, max_step_fraction=0.99, equilibrate_enable=True, direct_solve_method='auto', reduced_tol_gap_abs=5e-5, reduced_tol_gap_rel=5e-5, reduced_tol_feas=1e-4, reduced_tol_infeas_abs=5e-12, reduced_tol_infeas_rel=5e-5, reduced_tol_ktratio=1e-4, warm_start_no_retry=None)¶
Interior-point method settings.
Convergence tolerances (all must be > 0):
- Parameters:
tol_gap_abs – Absolute duality gap tolerance (default 1e-8)
tol_gap_rel – Relative duality gap tolerance (default 1e-8)
tol_feas – Feasibility tolerance (default 1e-8)
tol_infeas_abs – Absolute infeasibility detection tolerance (default 1e-8)
tol_infeas_rel – Relative infeasibility detection tolerance (default 1e-8)
tol_ktratio – KKT ratio tolerance (default 1e-6)
Algorithm control:
- Parameters:
max_step_fraction – Maximum step size fraction per iteration (default 0.99, must be in (0, 1])
equilibrate_enable – Enable matrix equilibration preprocessing (default True)
direct_solve_method – KKT solver method. Valid options:
'auto' — benchmarked on first solve (default)
'qdldl' — CPU only, best for small/sparse KKT systems
'faer' — CPU only, multi-threaded, best for large CPU problems
'faer-1t' — CPU only, single-threaded faer variant
'faer-nt' — CPU only, faer with automatic thread count
'cudss' — CUDA only, best for large GPU problems
'riccati' — CPU/CUDA, specialized for block-tridiagonal KKT systems (MPC/LQR problems)
'woodbury' — CUDA only, specialized for diagonal P + low-rank A (portfolio-type problems)
Reduced tolerances (for AlmostSolved convergence, all must be > 0):
- Parameters:
reduced_tol_gap_abs – Reduced absolute duality gap tolerance (default 5e-5)
reduced_tol_gap_rel – Reduced relative duality gap tolerance (default 5e-5)
reduced_tol_feas – Reduced feasibility tolerance (default 1e-4)
reduced_tol_infeas_abs – Reduced absolute infeasibility tolerance (default 5e-12)
reduced_tol_infeas_rel – Reduced relative infeasibility tolerance (default 5e-5)
reduced_tol_ktratio – Reduced KT ratio tolerance (default 1e-4)
Warm start retry control:
- Parameters:
warm_start_no_retry – Set of SolverStatus values that do not trigger an automatic cold retry when warm starting. Default (None) uses {Solved, AlmostSolved, MaxIterations, CallbackTerminated}. All other statuses trigger an automatic cold retry with a warning. Pass a custom frozenset of SolverStatus values to override.
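Putting the two settings objects together, a typical construction might look like the sketch below (tolerance values chosen for illustration only):

```python
import moreau

# Loosen the gap tolerances and pin the KKT backend; every other field
# keeps its documented default.
ipm = moreau.IPMSettings(
    tol_gap_abs=1e-6,
    tol_gap_rel=1e-6,
    direct_solve_method='qdldl',  # CPU-only backend for small sparse systems
)
settings = moreau.Settings(
    device='cpu',
    max_iter=100,
    ipm_settings=ipm,
)
```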
Data Types¶
Solution¶
- class moreau.Solution¶
Single-problem solution.
- x: numpy.ndarray¶
Primal solution, shape (n,)
- z: numpy.ndarray¶
Dual variables, shape (m,)
- s: numpy.ndarray¶
Slack variables, shape (m,)
BatchedSolution¶
- class moreau.BatchedSolution¶
Batched solution with array outputs.
- x: numpy.ndarray¶
Primal solutions, shape (batch, n)
- z: numpy.ndarray¶
Dual variables, shape (batch, m)
- s: numpy.ndarray¶
Slack variables, shape (batch, m)
Supports indexing (solution[i] returns a Solution), len(), and iteration.
- to_warm_start()¶
Create a BatchedWarmStart from this solution (copies arrays).
- Return type:
BatchedWarmStart
WarmStart¶
- class moreau.WarmStart¶
Warm start point for a single conic optimization problem.
Contains primal, dual, and slack variables from a previous solve that can be used to accelerate convergence on a related problem. Create via solution.to_warm_start().
- x: numpy.ndarray¶
Primal variables, shape (n,)
- z: numpy.ndarray¶
Dual variables, shape (m,)
- s: numpy.ndarray¶
Slack variables, shape (m,)
BatchedWarmStart¶
- class moreau.BatchedWarmStart¶
Warm start point for batched conic optimization problems.
Contains primal, dual, and slack variables from a previous batched solve. Create via batched_solution.to_warm_start().
Supports indexing (ws[i] returns a WarmStart), len(), and iteration.
- x: numpy.ndarray¶
Primal variables, shape (batch_size, n)
- z: numpy.ndarray¶
Dual variables, shape (batch_size, m)
- s: numpy.ndarray¶
Slack variables, shape (batch_size, m)
SolveInfo¶
- class moreau.SolveInfo¶
Metadata from a solve.
- status: SolverStatus¶
Solve outcome
BatchedSolveInfo¶
- class moreau.BatchedSolveInfo¶
Metadata from a batched solve.
- status: list[SolverStatus]¶
Per-problem solve outcome (one per problem in batch)
TuneResult¶
- class moreau.TuneResult¶
Result from auto-tuning on the first solve() call (when auto_tune=True and device='auto' or direct_solve_method='auto').
SolverStatus¶
- class moreau.SolverStatus¶
Enum (IntEnum) indicating solve outcome. Supports direct comparison (status == SolverStatus.Solved) and int comparison (status == 1).
- Unsolved = 0¶
Not yet solved
- Solved = 1¶
Optimal solution found
- PrimalInfeasible = 2¶
Problem has no feasible solution
- DualInfeasible = 3¶
Problem is unbounded
- AlmostSolved = 4¶
Near-optimal solution (reduced tolerances met)
- AlmostPrimalInfeasible = 5¶
Near-certain primal infeasibility (reduced tolerances)
- AlmostDualInfeasible = 6¶
Near-certain dual infeasibility (reduced tolerances)
- MaxIterations = 7¶
Iteration limit reached
- MaxTime = 8¶
Time limit reached
- NumericalError = 9¶
Numerical issues encountered
- InsufficientProgress = 10¶
Solver stalled (insufficient progress between iterations)
- CallbackTerminated = 11¶
Terminated by user callback
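Because the class is an IntEnum, each status compares equal to its integer code, which makes filtering batched status lists straightforward. The stand-in enum below is not the real class, just the documented values, used to illustrate the comparison semantics:

```python
from enum import IntEnum

# Stand-in mirroring the first few documented SolverStatus codes.
class SolverStatus(IntEnum):
    Unsolved = 0
    Solved = 1
    PrimalInfeasible = 2
    DualInfeasible = 3
    AlmostSolved = 4

status = SolverStatus.Solved
assert status == SolverStatus.Solved  # enum comparison
assert status == 1                    # int comparison

# A batched status list can be filtered the same way:
statuses = [SolverStatus.Solved, SolverStatus.AlmostSolved, SolverStatus.Solved]
ok = [i for i, s in enumerate(statuses) if s == SolverStatus.Solved]
print(ok)  # [0, 2]
```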
Device Functions¶
- moreau.available_devices()¶
List all available devices, sorted by priority (highest first).
Always includes 'cpu' as a fallback. When CUDA is available it appears first (priority 100 vs CPU priority 0).
- moreau.device_available(device)¶
Check if a specific device is available.
- Parameters:
device – Device name ('cpu', 'cuda', etc.)
- Returns:
True if available
- Return type:
bool
- moreau.device_error(device)¶
Return the error message if device backend discovery failed.
Useful for diagnosing why a device (e.g. CUDA) is not available.
- Parameters:
device – Device name
- Returns:
Error message string, or None if no error
- Return type:
str or None
- moreau.default_device()¶
Return the effective default device (after applying priority and overrides).
- Returns:
Device name (e.g. 'cpu', 'cuda')
- Return type:
str
- moreau.get_default_device()¶
Get the current default device override.
- Returns:
Device name if an override is set, otherwise None
- Return type:
str or None
- moreau.set_default_device(device)¶
Set the global default device.
- Parameters:
device – Device name ('cpu', 'cuda', etc.) or None to reset to automatic selection
- Raises:
ValueError – If the specified device is not available
- moreau.torch_available(device=None)¶
Check if PyTorch integration is available.
- Parameters:
device – Optional device name. If None, checks whether any device has torch support.
- Returns:
True if PyTorch solver is available
- Return type:
bool