I've got a purely academic question about SQLite databases.

I'm using SQLite.net to work with a database in my WinForms project, and as I was setting up a new table I got to thinking about the maximum values of the ID column.

I use IDENTITY for my [ID] column, which according to the SQLite.net DataType Mappings is equivalent to DbType.Int64. I normally start my ID columns at zero (with that row as a test record) and have the database auto-increment.

The maximum value (Int64.MaxValue) is 9,223,372,036,854,775,807. For my purposes, I'll never come anywhere close to reaching that maximum, but what happens in a database that does? While trying to read up on this, I found that DB2 apparently "wraps" the value around to the negative end (-9,223,372,036,854,775,807) and increments from there, until the database can no longer insert rows because the ID column has to be unique.

Is this what happens in SQLite and/or other database engines?

I doubt anybody knows for sure, since even if a million rows per second were inserted, it would take about 292,471 years to reach the wrap-around-risk point -- and databases have only been around for a tiny fraction of that time (actually, so has Homo Sapiens ;-).
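As a quick sanity check on that arithmetic, here's a one-liner you could run in the SQLite shell (assuming 365-day years and a steady million inserts per second):

select 9223372036854775807 / (1000000.0 * 60 * 60 * 24 * 365);
-- about 292471 (years)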

IDENTITY isn't really the proper way to auto-increment in SQLite; it requires you to do the incrementing in the application layer. In the SQLite shell, try:

create table bar (id IDENTITY, name VARCHAR);
insert into bar (name) values ('John');
select * from bar;

You will see that id is simply null. SQLite doesn't give any special significance to IDENTITY, so it's essentially an ordinary (untyped) column.
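If you want to see that for yourself, a quick check in the same shell session (using SQLite's built-in typeof):

select typeof(id), name from bar;
-- null|John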

On the other hand, if you do:

create table baz (id INTEGER PRIMARY KEY, name VARCHAR);
insert into baz (name) values ('John');
select * from baz;

it will be 1, as I think you expect.

Note that there is also INTEGER PRIMARY KEY AUTOINCREMENT. The basic difference is that AUTOINCREMENT guarantees keys are never reused: if you delete John, 1 will never be used again as an id. Either way, if you use INTEGER PRIMARY KEY (with the optional AUTOINCREMENT) and exhaust the ids, SQLite is supposed to fail with SQLITE_FULL, not wrap around.
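Both points are easy to try in the shell. A rough sketch (the table name qux is made up for illustration, and the SQLITE_FULL failure is only guaranteed with AUTOINCREMENT):

create table qux (id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR);
insert into qux (name) values ('John');      -- gets id 1
delete from qux where id = 1;
insert into qux (name) values ('Mary');      -- gets id 2; the deleted 1 is not reused
insert into qux (id, name) values (9223372036854775807, 'last');
insert into qux (name) values ('overflow');  -- fails with SQLITE_FULL: no larger id exists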

By using IDENTITY, you do open up the (probably irrelevant) possibility that the application layer will incorrectly wrap around if the db were ever full. This is entirely possible, because IDENTITY columns in SQLite can hold any value (including negative ints). Again, try:

insert into bar VALUES ('What the hell', 'Bill');
insert into bar VALUES (-9, 'Mary');

Both of those are completely valid for bar. The -9 would be valid for baz too, although the text id would be rejected with a datatype mismatch, since INTEGER PRIMARY KEY must hold an integer. More importantly, with baz you can avoid manually specifying the id at all; that way there will never be junk in your id column.
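To see the difference, assuming the baz table from above is still around in the session:

insert into baz VALUES ('What the hell', 'Bill');  -- rejected: datatype mismatch
insert into baz VALUES (-9, 'Mary');               -- accepted: -9 is still an integer
insert into baz (name) values ('Dave');            -- better: let SQLite pick the next id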

I can't speak to DB2's specific implementation logic, but the "wrap" behavior you describe is standard for numbers whose signedness is implemented via two's complement.

As for what would actually happen, that's completely up in the air as far as how the database would handle it. The problem arises at the moment of actually CREATING the id that's too big for the field, since it's unlikely the engine internally uses a data type of more than 64 bits. At that point it's anybody's guess... the internal language used to build the engine could throw, the number could quietly wrap around and simply cause a primary key violation (assuming a conflicting ID existed), the world could end because of your overflow, etc.

But pragmatically, Alex is correct. The theoretical limit on the number of rows involved here (assuming it's one id per row and not any sort of shared-id insert shenanigans) essentially renders the problem moot: by the time you could actually enter that many rows, even at a stupendous insertion rate, we'll all be dead anyway, so it doesn't matter :)