Ask HN: How to mitigate MySQL data corruption after upgrade? (BEWARE 8.0.31)

We got bitten by an upgrade to MySQL 8.0.31: https://repost.aws/questions/QUZZaycd4OSY2k8iLNVFuXvA/data-corruption-with-rds-my-sql-8-0-31 -- and that version is still live as of now, so beware.

MySQL 8.0.29 had issues so bad that it was pulled: https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-29.html

So multiple recent MySQL versions have had data corruption bugs that got through all of MySQL's own validation, as well as the validation that cloud providers such as AWS RDS do before offering a version as an upgrade.

The only solutions for a corruption issue that gets through validation are both bad:
- Point In Time Recovery back to before the upgrade, which means an outage and losing all data written since the upgrade (a rough restore sketch follows after this list)
- innodb_force_recovery, mysqldump, and building a new server on an older version

Is there any way to mitigate against a data corruption issue in the first place? My first thought would be to leave a read replica on the older version so it can be promoted in case the primary database crashes, but that setup is noted as "not supported" by MySQL, and RDS forces upgrading the replicas before the primary, so even though it might work, it's not a solution on RDS.

Any other possibilities I'm not thinking of?
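For what it's worth, the PITR option can be scripted against the RDS API. Here's a rough Python/boto3 sketch -- the instance identifiers, region, and restore timestamp are placeholders, not anything from the incident above, and you still eat the outage and the data loss since that time:

    # Sketch: restore an RDS MySQL instance to a point in time just before
    # the upgrade. All identifiers and the timestamp are placeholders.
    import boto3
    from datetime import datetime, timezone

    rds = boto3.client("rds", region_name="us-east-1")

    rds.restore_db_instance_to_point_in_time(
        SourceDBInstanceIdentifier="prod-mysql",           # corrupted instance
        TargetDBInstanceIdentifier="prod-mysql-restored",  # new instance in pre-upgrade state
        RestoreTime=datetime(2023, 1, 10, 3, 0, tzinfo=timezone.utc),  # just before the upgrade
        UseLatestRestorableTime=False,
    )

    # Wait until the restored instance is available, then cut the app over to it.
    waiter = rds.get_waiter("db_instance_available")
    waiter.wait(DBInstanceIdentifier="prod-mysql-restored")

It creates a new instance rather than fixing the old one in place, so you also have to repoint the application and clean up the corrupted instance afterwards.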
We use RDS MySQL with a read replica and Multi-AZ, and we got bitten by the corruption problem on MySQL 8.0.31 too...

In our case the Multi-AZ failover didn't work, and our salvation was promoting the read replica as the new master... The problem didn't replicate to the replica (I don't know why), but if you try a failover you lose your master, because it stays in rebooting status forever...

Our solution was the same one you described: downgrade to 8.0.28.

Anyone else with the same problem?
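If it helps anyone hitting the same wall, the promotion step described above can also be scripted. A boto3 sketch, again with placeholder identifiers -- and keep in mind promotion breaks replication, so it's a one-way door:

    # Sketch: promote the read replica to a standalone instance once the
    # primary is stuck. Identifiers are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.promote_read_replica(
        DBInstanceIdentifier="prod-mysql-replica",
        BackupRetentionPeriod=7,  # re-enable automated backups on the promoted instance
    )

    waiter = rds.get_waiter("db_instance_available")
    waiter.wait(DBInstanceIdentifier="prod-mysql-replica")
    # After this, repoint the application at the promoted instance.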