Quickstart¶
Solve your first convex optimization problem in 4 steps.
Prerequisites
Python 3.9+
pip install moreau
For gradients: PyTorch or JAX
1 Define the Problem¶
Create a convex conic optimization problem:
import moreau
import numpy as np
from scipy import sparse
# Problem dimensions
n = 2 # variables
m = 3 # constraints
# Objective: (1/2)x'Px + q'x
P = sparse.diags([1.0, 1.0], format='csr') # Quadratic term
q = np.array([2.0, 1.0]) # Linear term
# Constraints: Ax + s = b, s in K
A = sparse.csr_matrix([
    [1.0, 1.0],  # x1 + x2 = 1 (equality via zero cone)
    [1.0, 0.0],  # x1 <= 0.7 (inequality via nonneg cone)
    [0.0, 1.0],  # x2 <= 0.7
])
b = np.array([1.0, 0.7, 0.7])
Tip
Sparse matrices are recommended for performance. Use scipy.sparse CSR format.
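Before running the solver it helps to know the expected answer. Because the conic form is Ax + s = b with s >= 0 on the inequality rows, the bounds read as x1 <= 0.7 and x2 <= 0.7, and substituting x2 = 1 - x1 reduces the objective to x1^2 + 1.5 on the feasible segment. A quick brute-force scan in plain NumPy (no moreau needed) confirms the optimum:
import numpy as np
# Brute-force the feasible segment: x1 + x2 = 1 with
# 0.3 <= x1 <= 0.7 (x1 >= 0.3 follows from x2 = 1 - x1 <= 0.7)
x1 = np.linspace(0.3, 0.7, 1001)
x2 = 1.0 - x1
obj = 0.5 * (x1**2 + x2**2) + 2.0 * x1 + x2
best = np.argmin(obj)
print(x1[best], x2[best], obj[best])  # ~0.3, ~0.7, ~1.59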
2 Specify Cones¶
Define the cone structure for your constraints:
# First constraint uses zero cone (equality)
# Next two use nonnegative cone (inequalities)
cones = moreau.Cones(
    num_zero_cones=1,    # 1 equality constraint
    num_nonneg_cones=2,  # 2 inequality constraints
)
Available Cones
| Cone | Description | Typical Use |
|---|---|---|
| Zero | Equality: s = 0 | Linear equalities |
| Nonnegative | Inequality: s >= 0 | Linear inequalities |
| Second-order | Second-order cone (dim 3) | Norm constraints |
| Exponential | Exponential cone (dim 3) | Log/exp constraints |
| Power | Power cone (dim 3) | Power function constraints |
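Cone blocks consume rows of A and b in order, as in the snippet above: the zero-cone row comes first, then the nonnegative rows. As a hypothetical variation, a problem with two equalities followed by three inequalities would be declared as:
# Hypothetical layout: rows 0-1 of (A, b) are equalities,
# rows 2-4 are inequalities; rows are matched to cones in order
cones = moreau.Cones(
    num_zero_cones=2,
    num_nonneg_cones=3,
)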
3 Create Solver & Solve¶
Create a solver and solve the problem:
# Create solver with problem data
solver = moreau.Solver(P, q, A, b, cones=cones)
# Solve
solution = solver.solve()
print(f"Optimal x: {solution.x}")
print(f"Status: {solver.info.status}")
print(f"Objective: {solver.info.obj_val}")
4 Check Results¶
Access solution and solver information:
# Primal solution
x = solution.x # Optimal x values
# Dual variables
z = solution.z # Dual variables
s = solution.s # Slack variables
# Solver metadata
info = solver.info
print(f"Status: {info.status}") # SolverStatus.Solved
print(f"Objective: {info.obj_val:.4f}") # Optimal objective value
print(f"Iterations: {info.iterations}") # IPM iterations
print(f"Solve time: {info.solve_time:.4f}s")
Complete Example¶
Here’s everything together:
import moreau
import numpy as np
from scipy import sparse
# 1. Define problem
P = sparse.diags([1.0, 1.0], format='csr')
q = np.array([2.0, 1.0])
A = sparse.csr_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.7, 0.7])
# 2. Specify cones
cones = moreau.Cones(num_zero_cones=1, num_nonneg_cones=2)
# 3. Solve
solver = moreau.Solver(P, q, A, b, cones=cones)
solution = solver.solve()
# 4. Results
print(f"x = {solution.x}")
print(f"Status: {solver.info.status}")
Batched Solving¶
For multiple problems with the same structure, use CompiledSolver:
import moreau
import numpy as np
# Define structure (shared across batch)
cones = moreau.Cones(num_zero_cones=1, num_nonneg_cones=2)
settings = moreau.Settings(batch_size=4)
solver = moreau.CompiledSolver(
    n=2, m=3,
    P_row_offsets=[0, 1, 2], P_col_indices=[0, 1],
    A_row_offsets=[0, 2, 3, 4], A_col_indices=[0, 1, 0, 1],
    cones=cones,
    settings=settings,
)
# Set matrix values (can be shared or per-problem)
solver.setup(
    P_values=[1.0, 1.0],            # Shared across batch
    A_values=[1.0, 1.0, 1.0, 1.0],
)
# Solve batch
qs = np.array([[2.0, 1.0]] * 4)
bs = np.array([[1.0, 0.7, 0.7]] * 4)
solution = solver.solve(qs, bs)
print(f"Batch solutions shape: {solution.x.shape}") # (4, 2)
print(f"First status: {solver.info.status[0]}")
Next Steps¶
- Use optimization layers in neural networks with autograd support.
- Functional API compatible with vmap, jit, and grad.
- Solve thousands of problems in parallel for maximum throughput.
- Real-world applications: control, portfolio optimization, and more.