I remember dBASE IV from my childhood days when my father, who had no computer background, was required to take computer training by his workplace. My father and his colleagues were given free evening computer lessons by their company, taught by the same teachers who used to teach us, the kids, computers in our school.
After their first class, he brought home a fat dBASE IV manual. Since I was very interested in computer books, I read a good portion of it even though I had never touched dBASE in my life. I would daydream of all the little forms, queries, reports and labels I could make with dBASE. But I never did get to use it; we kids got LOGO lessons in school instead.
One day my father came back from his evening lesson mildly distressed about something he had learnt. He said they were being taught loops but in the loop there was an equation that seemed just plain wrong. It was:
i = i + 1
How could that be a valid equation? How could i ever equal i + 1? He mentioned that he had asked the teacher about it, and from what I could gather, the teacher and my father were talking past each other. The teacher probably tried explaining that it was not an equation but an instruction, whereas my father kept interpreting i = i + 1 as an equation, thanks to the algebra he was so familiar with. It sort of held up the class for a while.
The teacher asked my father's name, perhaps so that he could talk to him separately later. But when he learnt my father's name, he realised that his son, me, went to the same school where he taught. So he told my father, 'When you get back home, ask your son about i = i + 1. He will explain it to you better than I am able to.'
And indeed I was able to explain it to him pretty well. I was eight or nine years old back then. And that was probably the first thing I taught my father!
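For anyone who never had this explained: in most programming languages, = is an instruction ("evaluate the right-hand side, then store the result in the variable on the left"), not a statement of equality. A minimal sketch in Python, which inherits the same convention:

```python
# '=' is assignment: evaluate the right side, then store it in i.
i = 0
i = i + 1   # read i (0), add 1, store the result back into i
i = i + 1   # i is now 2

# '==' is the equality test my father expected '=' to mean.
print(i == 2)   # True
```

So i = i + 1 is not a contradiction; it is a recipe for counting, which is exactly why it shows up inside loops.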
The = for assignment is FORTRAN's fault. In the beginning there was no equality, just assignment, and FORTRAN (being just a FORmula TRANslator, after all) made the somewhat dubious decision to use = for that (punch-card space being scarce and symbols limited and all).
When FORTRAN gained equality it went for .EQ. out of practicality and necessity. Many others followed suit but used the somewhat more pleasant == instead of .EQ.
But it didn't have to happen that way. ALGOL decided to stick close to mathematical tradition:
= for equality
:= for definition (assignment)
While x := x + 1 is still not clean mathematical notation I think it wouldn't have riled up OP's father as much. If you squint enough you might even see little indices there.
I think that was just to make the parser understand it was an assignment without having to do any lookahead. There was also potentially ambiguous stuff with equality and assignment when both used =.
Some of my earliest programming exposure was a dBASE IV book my dad had for work, though it was some time before I put any of it into action. At that time I was reading manuals like fiction, only slowly realizing that I could actually use some of it with our computer.
It's a notational issue. IIRC Pascal used := for assignment and = for equality testing.
Where this becomes extremely Rorschach is the spectrum between "notation is absolutely critical: there is only one correct representation of programs in people's heads and we have to match that exactly" vs. "all program text is ultimately syntactic sugar and programmers will just adapt to whatever". History tells us that the C choice of = for assignment and == for equality testing won, but of course that's not a choice in a vacuum and it's tied up with a thousand other choices.
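One concrete cost of the C choice is the classic typo trap: in C, `if (x = 5)` compiles, assigns 5 to x, and takes the branch. Python kept the same = / == pair but sidestepped the trap by making plain assignment a statement rather than an expression, as this small sketch shows:

```python
x = 4

# In C, 'if (x = 5)' would silently assign and take the branch.
# Python rejects '=' in expression position outright:
try:
    compile("if x = 5: pass", "<snippet>", "exec")
except SyntaxError:
    print("assignment is a statement, not an expression")

# The intended test uses '==':
print(x == 5)   # False: x is still 4
```

So even within the "C won" world there is room to blunt the sharpest edge of the notation.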
While I'm a big fan of immutable design, it makes some algorithms much more expensive and ultimately DRAM is mutable. And the example we're talking about could be a loop counter!
Don't forget DataEase [1]. That's what I eventually moved to from dBase (although, IIRC, it was through back-and-forth evaluations of FoxPro, Clipper, and Paradox). DataEase was considered a "Fourth-Generation Language" (4GL) and it was wonderful to work with. As a teenage "systems analyst" working for a division of GE (my first paid tech job), I built a file room management system for their large file rooms (remember those?). Having to thoroughly test security, I put it through its paces and found a way to hack into any application built on DataEase. Eventually I explained the procedure to DataEase's development team (which included one developer traveling to my fraternity house for a face-to-face meeting; so funny trying to be business-like in a place with sticky floors that smelled of stale beer), and they fixed the hole. There weren't any bug bounties in those days but, as a reward, they gave me lifetime upgrades and let me attend all their training seminars for free. It was my 4GL experience that ultimately led to learning Cognos.
Funny aside: I remember the first time my GE boss asked me for an invoice as it was the only way he could pay me. I had no idea what it should look like. So he sent me to the PM of one of the COBOL contractor teams who gave me a template that I copied. The PM eventually asked me to do some COBOL programming for them as well. Good times.
I find this blog consistently negative (check its other posts). Although this post is interesting, and I know little about dBase and this is a sad story, I'm simply not sure how accurate the blog as a whole is. My best suggestion: take its statements as someone's personal opinion, not necessarily as fact; that is, with a pinch of salt.
> It is believed that - alongside the BOLD source code (missing for more than 10+ years), the BDE and many original dBase source code was lost during the ill fated Borland + Corel merger (which was eventually called off).
This is confusing because the article is supposedly about dBase, and I have no idea why Bold is relevant. It's an example of where I feel the general negativity of the blog veers into random discussions.
To the best of my knowledge, the Bold source was not lost. In fact, Embarcadero open sourced it several years ago. This blog post has details: https://blogs.embarcadero.com/bold-for-delphi-is-open-source... I worked there at the time (though I did not drive its open-sourcing); it was a positive move, and it clearly contradicts the blog's statement. It appears actively maintained and updated these days. I would differentiate 'lost' from 'owned but not made available publicly'.
My reading is that the BOLD source was lost and later found, but that the BDE and some parts of dBase were lost completely, as evidenced by them shipping an unmodified version year after year. Perhaps the author assumed people were more aware of the BOLD source having been lost.
> By feeding legacy PRG (circa 1985) and logics to models like Claude, ChatGPT, developers can now instruct the AI to translate decades-old dBase PRG directly into memory-safe Rust, highly concurrent Go, or modern Dart/Flutter cross-platform applications.
And it alludes to this early on, but it doesn't show any examples.
I'm not sure what the article suggests: create a custom Rust program that reads and writes a given DBF file? Create a Rust program that mirrors the PRG code, reading and writing data in a custom format?
I still maintain a VFP9 project from time to time.
Although AI has been extremely helpful in writing VFP9 code, I can't imagine migrating this enormous project, which has grown over the course of 30 years, to a more modern system by feeding the source code to AI.
While one could debate which approach would be best for migrating such a project, an 'AI-led Big Bang Migration' would be insane.
However, AI would certainly be helpful for migration.
If you're ever looking to migrate off of that, a better starting point is finding a way to dump the DBF files into CSV (there's a Perl script for this that works wonders, called DBF2CSV, but I hear LibreOffice can just open these files too)...
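For what it's worth, the dBase III layout really is simple enough that a rough CSV dumper fits on one screen. Here's a minimal sketch in Python (my own, not the DBF2CSV script): it handles fixed-width ASCII fields and skips soft-deleted records, but ignores Numeric/Date/Logical typing, memo (.dbt) files, and codepages:

```python
import csv
import struct
import sys

def dbf_to_rows(data: bytes):
    """Parse a dBase III .dbf image into (field_names, rows)."""
    # Header: record count at offset 4 (uint32), header length and
    # record length at offsets 8 and 10 (uint16 each), little-endian.
    n_records, header_len, record_len = struct.unpack_from("<IHH", data, 4)

    # Field descriptors: 32 bytes each, starting at offset 32,
    # terminated by a 0x0D byte. Name is NUL-padded in bytes 0-10,
    # field width sits at byte 16 of each descriptor.
    fields = []
    pos = 32
    while data[pos] != 0x0D:
        name = data[pos:pos + 11].split(b"\x00")[0].decode("ascii")
        length = data[pos + 16]
        fields.append((name, length))
        pos += 32

    rows = []
    for i in range(n_records):
        rec = data[header_len + i * record_len:header_len + (i + 1) * record_len]
        if rec[:1] == b"*":      # '*' flags a soft-deleted record
            continue
        row, off = [], 1         # byte 0 of each record is the deletion flag
        for _, length in fields:
            row.append(rec[off:off + length].decode("ascii", "replace").strip())
            off += length
        rows.append(row)
    return [name for name, _ in fields], rows

if __name__ == "__main__" and len(sys.argv) > 1:
    names, rows = dbf_to_rows(open(sys.argv[1], "rb").read())
    writer = csv.writer(sys.stdout)
    writer.writerow(names)
    writer.writerows(rows)
```

Run as `python dump.py legacy.dbf > legacy.csv`. For anything production-grade you'd still want a real library or LibreOffice, but for a one-off migration this gets the data out.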
After inheriting a project where the source-code CD-ROM went missing, I can definitely see a use case for at least trying the latest frontier models to rescue the logic; it took me a while to reverse that thing manually with radare2 to fix a bug.
Microsoft Access 2.0 had filters to import and export data from and to DBF files. We used this in WFW 3.11 to convert from dBase to MS Access and, later, to SQL Server.
There was some Turbo C and Turbo Pascal source code around that read DBF files, but hardly anyone used it. Most stored data was in text files that any application could read.
MS Access can use DBF files almost as if they were standard Access tables. This was particularly useful when working with ESRI Shapefiles, as it allowed the DBF files to be edited in Access and the changes to be viewed directly in ArcGIS. When editing maps, Access was often more convenient than the ESRI Editor.
I want to say no. As a way of working, those DBMS systems were a dead end. Not every problem is database tables, and having had a job replacing a dBase III system, I never want to see it or its ilk again.
80% of everything is crap anyway, no matter the tech stack. But I think something was lost: not everything is a database, but ever since Microsoft started ignoring MS Access, nothing is a database. Or rather, Excel is used as a database. That can't be good either.
Oh 100% agree on Excel - it's no substitute for those dBase/Clipper/Fox systems.
Y'know what? It's probably true that that niche needs filling again, as long as it isn't the dBase file format. I had to deal with one system that blew past the documented max file size for dBase III, but for some bizarre reason the original dBase III executable didn't care.
However, you couldn't load it with any of the ODBC drivers; they would all fail, except for one obscure Sybase-based driver whose details I have forgotten.
One of my first jobs had a legacy application that was based on Clipper. I never had to work with it directly (since I worked on the SQL successor) but reading the source code was still fascinating.
One of my favourite DB systems. I started with dBase III+, where our teacher made us enter the high-school library records, followed by Clipper Summer '87, and shortly thereafter Clipper 5.x with its OOP extensions.
Great productivity tool, garbage collected, compiled, in the constrained environment of MS-DOS PCs.
The migration to Windows 3.1 took too long, giving FoxPro, Access, Visual Basic and Delphi time to establish themselves in the same programming communities.
Similar to other HNers, Clipper was how I made my first attempts at working for others during high school.
Ah, Clipper was a major force in the late ’80s in Yugoslavia, as economic reforms enabled the widespread establishment of private companies that needed accounting software, and PCs became cheap enough for one-person shops to develop custom accounting solutions. It was the Wild West for a few years, with a zillion different applications, until some bigger players emerged.
IIRC, it needed one or two 360K floppies for a full install (a pirated copy; maybe the legal distribution was larger - at that time, all software was pirated). Compiling was fast (on a computer where you type dir and can read the filenames appearing on the screen faster than the computer can print them), but linking was slow, so everyone replaced MS Link with Borland’s TurboLink, which was an order of magnitude faster. It didn’t support overlays, but there were ways to work around that.
There was also documentation available in some third-party TSR app.
Later, another linker became popular: Blinker, which had a bunch of interesting features, such as loading overlays into EMS memory and providing various security functions to help protect your software. But by that time, the writing was already on the wall for DOS.
Funnily enough, many customers actually preferred DOS, since navigating with the keyboard was far faster than using a mouse, and Windows apps generally weren’t designed with keyboard navigation in mind.
Ah, Blinker! Never used it, but I remember the ads in magazines.
Same in the Iberian Peninsula regarding software acquisition. Even during university, the same copy centers that copied books also offered catalogs of software we might like to have, or even street bazaars. Only in the 2000s did the government (in Portugal) actually start hunting down those practices.
I remember my father built his own personal accounting tool using dBase; I think it was on MS-DOS at the time, and I was a kid. Quite the achievement, I think: he was not a software engineer, just a hobbyist.
I feel the timeline is wrong re when dBase Inc took over. I remember working as a consultant on shipping new features for dBase back in 2000 or so.
I implemented reflection for the dBase language and was also part of an effort to build it with Visual C++ instead of the Borland compiler. I was very green back then, but it was interesting: my only time dealing with interpreters and compilers.
In 1998 I wrote a financial summary for our ERP system (EUROnet) running on MS-DOS with a dBase DB as the backend. I connected the dBase data to a PHP 3 web server with Apache 1 and summarized the sales data. My boss loved it: he could see numbers that weren't available in the ERP reports.
My first gig at 18 was managing my university library's database (in dBase III; it was the 1980s) and writing the user interfaces for searching. This was a pre-SQL database for you youngins in case you have no idea what I'm talking about.
One of my first sizeable projects was a COM-compatible compiled language with .dbf support primitives for data transformation. As a unique quirk, it could even run on Novell NetWare to interface with Btrieve.
NetWare supported loading PE executables, but it lacked memory protection, so developing for it was... fun.
The .dbf format was pretty straightforward, though.
[1] https://www.dataease.com/
https://github.com/infused/dbf/
They could still be king of the hill now if it weren't for the suits who completely ruined it after Philippe Kahn left the scene.
Strange that it is not cited in the post.
The company had to declare me as an apprentice in 'trade jobs', as it was against the law to pay a regular salary to someone under 16.
I remember my first paycheck with deductions for retirement, which pissed me off quite a bit.
I think the main gist (you work not as an app developer but as a database developer) is something that is missing in partial attempts like Access and such.
BTW: Wanna join me or help?
I just don't think I could deal with it again.
Semi off-topic: The wikipedia article on Ed Esber is in dire need of a clean up https://en.wikipedia.org/wiki/Ed_Esber