If you don't mind a fixed table size, you can also do:<p><pre><code> create table test_table( id integer identity(1,1), val varchar(max) )
-----
create trigger auto_updater on test_table after insert as
begin
-- assumes single-row inserts; "inserted" holds every row of a multi-row insert
declare @id integer = (select id from inserted)
declare @val varchar(max) = (select val from inserted)
-- overwrite the slot this row maps to (slots 1 through 10)
update test_table
set val=@val
where id=(@id-1)%10+1
-- once all 10 slots exist, drop the freshly inserted row
if @id > 10
delete from test_table where id=@id
end
</code></pre>
Then to insert, you just insert; the trigger automatically overwrites the correct entry.<p><pre><code> insert into test_table(val) values ('hooray')
</code></pre>
Insta-looping for free (syntactically). Though I wrote this in T-SQL, I'd assume you can pull the same trick in MySQL (identity => auto_increment). You can also have it set a timestamp as part of the update.
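As a sanity check that the trick ports to other engines, here's a minimal sketch of the same circular-buffer trigger in SQLite (driven from Python's sqlite3 module). Table and trigger names are illustrative, and the slot arithmetic is adjusted so insert #1 lands in slot 1; note the AUTOINCREMENT keyword matters, since plain rowids would get reused after the trigger deletes a row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table test_table (
    id integer primary key autoincrement,  -- AUTOINCREMENT keeps ids monotonic even after deletes
    val text
);
create trigger auto_updater after insert on test_table
begin
    -- overwrite the slot this row maps to (slots 1 through 10)
    update test_table
       set val = new.val
     where id = (new.id - 1) % 10 + 1;
    -- once all 10 slots exist, drop the freshly inserted row
    delete from test_table where new.id > 10 and id = new.id;
end;
""")

for i in range(25):
    conn.execute("insert into test_table(val) values (?)", (f"entry {i}",))
conn.commit()

rows = conn.execute("select id, val from test_table order by id").fetchall()
print(rows)  # at most 10 rows; old entries are overwritten in place
```

After 25 inserts the table still holds exactly rows 1 through 10, each carrying the most recent value that mapped to its slot.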
It's a nice solution, but it doesn't seem to solve the problem they present at the start of the article. "What we wanted was a way to keep at least 24 hours worth of entries at all times"; their solution only stores the last MAX_CIRCULAR_LOG_ROWS entries, regardless of how much time those span.
Wouldn't it be faster to keep the log_id in a variable so you don't have to do both a select and an update? In fact, wouldn't it be faster to just store 48 hours and delete the oldest 24? Is it going to kill you to have 47:59 hours of logs?
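The "store 48 hours, sweep the oldest" idea can be sketched like this, again in SQLite via Python's sqlite3 module (names and the `when` back-dating hook are made up for illustration). Each row carries a timestamp, and a periodic sweep deletes everything past the retention window instead of deleting on every insert:

```python
import sqlite3

RETENTION_HOURS = 48  # keep up to 48h so at least 24h always survive a sweep

conn = sqlite3.connect(":memory:")
conn.execute("""
    create table log_table (
        id integer primary key autoincrement,
        created_at text default current_timestamp,  -- UTC 'YYYY-MM-DD HH:MM:SS'
        val text
    )
""")

def log(val, when=None):
    # 'when' exists only so this example can back-date a row
    if when is None:
        conn.execute("insert into log_table(val) values (?)", (val,))
    else:
        conn.execute("insert into log_table(created_at, val) values (?, ?)",
                     (when, val))

def sweep():
    # run occasionally (e.g. from a cron job), not on every insert
    conn.execute(
        "delete from log_table where created_at < datetime('now', ?)",
        (f"-{RETENTION_HOURS} hours",),
    )

log("old entry", when="2000-01-01 00:00:00")  # well outside the window
log("fresh entry")
sweep()
rows = [r[0] for r in conn.execute("select val from log_table")]
print(rows)
```

The sweep is a single indexed range delete, so it trades the per-insert trigger work for one cheap batch pass.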