How to Reduce Majority Bias in AI Models | HackerNoon
Data science, from HackerNoon, 2 months ago
This work explores the inductive biases of fair learning algorithms and proposes a robust optimization scheme to enhance demographic parity.
In our experiments, we applied the SA-DRO algorithm to the DDP-based KDE fair learning algorithm proposed by [11] and to RFI proposed by [13]. We fixed the fairness regularization penalty coefficient at λ = 0.9. The DRO regularization coefficient ϵ can take values in the range [0, 1]; in this table, we set ϵ = 0.9 for the SA-DRO case.
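To make the roles of the two coefficients concrete, here is a minimal sketch of a fairness-regularized, distributionally robust objective. This is an illustrative assumption, not the authors' implementation: the function names (`ddp_penalty`, `sa_dro_objective`), the squared-loss choice, and the use of ϵ to mix the average loss with the worst-group loss are all hypothetical; only the λ (fairness penalty weight) and ϵ ∈ [0, 1] (DRO weight) parameters come from the text.

```python
import numpy as np

def ddp_penalty(scores, groups):
    # Difference of Demographic Parity (DDP) surrogate: the gap between
    # the mean predicted scores of the two demographic groups.
    return abs(scores[groups == 0].mean() - scores[groups == 1].mean())

def sa_dro_objective(scores, labels, groups, lam=0.9, eps=0.9):
    # Hypothetical combined objective: a DRO-style mixture of the average
    # loss and the worst-off group's loss (weighted by eps), plus a
    # fairness penalty weighted by lam. Both defaults mirror the paper's
    # settings (lam = 0.9, eps = 0.9).
    losses = (scores - labels) ** 2                      # per-sample squared loss
    avg_loss = losses.mean()                             # standard ERM term
    worst_group = max(losses[groups == 0].mean(),
                      losses[groups == 1].mean())        # worst-group loss
    robust_loss = (1 - eps) * avg_loss + eps * worst_group
    return robust_loss + lam * ddp_penalty(scores, groups)
```

With ϵ = 0 this reduces to the standard fairness-regularized objective; with ϵ = 1 the robust term looks only at the worst-off group, which is the sense in which increasing ϵ counteracts majority bias.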