Path: news.daimi.aau.dk!news.uni-c.dk!sunic!sunic.sunet.se!news.funet.fi!news.eunet.fi!EU.net!Germany.EU.net!Dortmund.Germany.EU.net!Informatik.Uni-Dortmund.DE!news
From: grossjoh@schroeder.informatik.uni-dortmund.de (Kai Grossjohann)
Newsgroups: comp.lang.beta
Subject: Data on secondary storage
Date: 24 May 1995 11:04:05 +0200
Organization: University of Dortmund, Germany
Lines: 28
Message-ID:
Reply-To: Kai Grossjohann
NNTP-Posting-Host: schroeder.informatik.uni-dortmund.de
X-Newsreader: (ding) Gnus v0.75

Hi there,

I need to store large amounts of data on secondary storage.  The data
consists of tuples.  A number of tuples are stored together in a
relation, and there are several relations.  The operations needed are
mostly insert and delete operations, plus an iteration over all tuples
in a relation (via get_first and get_next operations, presumably).  We
will be storing more than a gigabyte of data this way.

I have so far looked at a number of options:

- PersistentStore does not provide the ability to delete objects from
  main memory when one is done with them.

- OODB requires that the whole database fit into main memory.

There is a C interface to Beta, so I suppose it should not be too
difficult to use any C library.  However, before I do that, I would
like to know whether anyone has dealt with something like this before.

Please note that we need neither transaction mechanisms, nor locking,
nor support for concurrency.  Nor do we need a client/server
architecture.

tia,
\kai{}
--
Life is hard and then you die.
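
For concreteness, here is a rough sketch (in C, since I would probably
go through the C interface anyway) of the kind of storage interface I
have in mind.  All of the names below are placeholders of my own and do
not refer to any existing library:

    /* Hypothetical interface -- names are placeholders, not an
       existing library. */
    #include <stddef.h>

    typedef struct relation relation_t;  /* opaque handle to one
                                             relation on disk */

    relation_t *rel_open(const char *path, size_t tuple_size);
    void        rel_close(relation_t *rel);

    int rel_insert(relation_t *rel, const void *tuple);   /* 0 on success */
    int rel_delete(relation_t *rel, const void *tuple);   /* 0 on success */

    /* Iteration over all tuples in a relation. */
    int rel_get_first(relation_t *rel, void *tuple_out);  /* 0 if a tuple was found */
    int rel_get_next(relation_t *rel, void *tuple_out);   /* 0 if another tuple exists */

    /* Typical use: walk every tuple in a relation. */
    void dump_all(relation_t *rel, void *buf)
    {
        for (int rc = rel_get_first(rel, buf); rc == 0;
             rc = rel_get_next(rel, buf)) {
            /* process the tuple currently in buf */
        }
    }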