How can I use multiple optimizers to optimize different parameters?

import jittor as jt

jt.set_seed(42)

x = jt.array(1.0)
y = jt.array(2.0)

optimizer_x = jt.optim.SGD([x], lr=0.01)
optimizer_y = jt.optim.SGD([y], lr=0.01)

print("Initial values: x =", x.data, "y =", y.data)

for i in range(100):
    z = x + 2 * y
    
    loss = (z - 10.0) ** 2
    
    # update the parameters
    # ?
    
    if (i + 1) % 10 == 0:
        print(f"Iteration {i+1}: x = {x.data}, y = {y.data}, z = {z.data}, loss = {loss.data}")

print("\nFinal result:")
print("x =", x.data)
print("y =", y.data)
print("z = x + 2y =", (x + 2 * y).data)

Given code like the above, how can I use two separate optimizers to compute the gradients of x and y and optimize them independently?

Looking at Jittor's GAN example code, I found that computing the loss twice and optimizing x and y separately works, but is there a better approach?
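To make the two-pass (GAN-style) pattern concrete, here is a minimal plain-Python sketch with no Jittor dependency. The gradients are written out analytically for this toy loss (dL/dx = 2(z-10), dL/dy = 4(z-10)), so the update rule here is my own illustration of the idea, not Jittor API:

```python
# Two-pass pattern: each "optimizer" re-runs the forward pass
# before computing its own gradient, so the second update sees
# the parameter value produced by the first update.
x, y = 1.0, 2.0
lr = 0.01

for _ in range(100):
    # Pass 1: forward, gradient w.r.t. x only, update x.
    z = x + 2 * y
    x -= lr * 2 * (z - 10.0)

    # Pass 2: forward AGAIN with the updated x, gradient w.r.t. y, update y.
    z = x + 2 * y
    y -= lr * 4 * (z - 10.0)

z = x + 2 * y  # z converges toward 10
```

The key point is that the second pass rebuilds the computation from the updated x, which is what I understand the GAN examples to be doing by recomputing the loss before each optimizer's step.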

Also, since the two groups of parameters need SGD and AdamW respectively, they must be optimized separately:

import jittor as jt

jt.set_seed(42)

x = jt.array(1.0)
y = jt.array(2.0)

optimizer_x = jt.optim.SGD([x], lr=0.01)
optimizer_y = jt.optim.AdamW([y], lr=0.001)

print("Initial values: x =", x.data, "y =", y.data)

for i in range(100):
    z = x + 2 * y
    loss = (z - 10.0) ** 2
    optimizer_x.step(loss)

    # Re-run the forward pass so the second loss sees the updated x
    z = x + 2 * y
    loss = (z - 10.0) ** 2
    optimizer_y.step(loss)
    
    if (i + 1) % 10 == 0:
        print(f"Iteration {i+1}: x = {x.data}, y = {y.data}, z = {z.data}, loss = {loss.data}")

print("\nFinal result:")
print("x =", x.data)
print("y =", y.data)
print("z = x + 2y =", (x + 2 * y).data)

After reading the source code and experimenting, I found that the following keeps the gradients for both x and y at the same time. Is this usage correct?

import jittor as jt

jt.set_seed(42)

x = jt.array(1.0)
y = jt.array(2.0)

optimizer_x = jt.optim.SGD([x], lr=0.01)
optimizer_y = jt.optim.SGD([y], lr=0.01)

print("Initial values: x =", x.data, "y =", y.data)

for i in range(100):
    z = x + 2 * y
    loss = (z - 10.0) ** 2
    optimizer_x.backward(loss, retain_graph=True)
    optimizer_y.backward(loss)
    optimizer_x.step()
    optimizer_y.step()
    if (i + 1) % 10 == 0:
        print(f"Iteration {i+1}: x = {x.data}, y = {y.data}, z = {z.data}, loss = {loss.data}")

print("\nFinal result:")
print("x =", x.data)
print("y =", y.data)
print("z = x + 2y =", (x + 2 * y).data)
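For comparison, the single-backward variant above can be sketched in plain Python as one forward/gradient computation shared by both parameter updates. Again the gradients are hand-written for this toy loss; this illustrates the dataflow, not Jittor's API:

```python
# One-pass pattern: compute the forward pass and gradients once,
# then let both parameter updates consume gradients from the SAME pass.
x, y = 1.0, 2.0
lr_x, lr_y = 0.01, 0.01

for _ in range(100):
    z = x + 2 * y
    g = 2 * (z - 10.0)       # shared upstream gradient dL/dz
    gx, gy = g * 1, g * 2    # chain rule through z = x + 2*y
    # Both updates use gradients computed from the same z;
    # neither update sees the other's change until the next iteration.
    x -= lr_x * gx
    y -= lr_y * gy

z = x + 2 * y  # z converges toward 10
```

In this toy problem both patterns converge to z = 10; the difference is only whether the second update sees the first update's effect within the same iteration.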