
SQL is Dead (NOT), Long Live SQL!

I participated in an Oracle Academy Ask the Experts webcast in November 2014, with the title "The Code Under the Covers, aka the Database Under the App."

The objective of the webcast was to encourage professors to teach, and students to learn, SQL (the Structured Query Language) and PL/SQL (Procedural Language extensions to SQL). In preparing for the talk, I found myself thinking about Platonic Idealism. Strange, eh?

So I thought I'd share some of these thoughts on my blog. I hope you find them interesting enough to share and comment on.

First, it is very much worth noting that while SQL is "old" - the first commercial SQL database was released by Oracle waaaay back in 1979 - it's not going anywhere. In fact, it is becoming more important than ever, more entrenched, and more widely used.

And I think that part of the reason for SQL's stickiness has to do with the aforementioned Platonic Idealism - or, to be more precise, with how different the underlying philosophy of SQL is from that of Platonic Idealism.

Platonic Idealism is summarized on Wikipedia as follows:

"A particular tree, with a branch or two missing, possibly alive, possibly dead, and with the initials of two lovers carved into its bark, is distinct from the abstract form of Tree-ness. A Tree is the ideal that each of us holds that allows us to identify the imperfect reflections of trees all around us."

Surely you remember that from your Uni philosophy course? I did, sort of. And then the following idea came to me a few weeks ago: 

Wait a minute! Platonic Idealism sounds an awful lot like object orientation. You have a class, which is the "perfect" and "ideal" thing. Then you have instances of that class and subclasses of that class, all of which may vary in some way from the ideal.

Don't they seem quite similar, the Platonic Ideal and the Class? The resemblance was even noted at Wikipedia:

"The language for much of the article talks about 'instantiations', inherence, forms, etc... Sounds very much like inheritance/etc... from computer science. Perhaps this is deliberate, perhaps written by a comp. sci. person, perhaps it's totally my perception."

So here's the thing: I don't believe that humans see the world according to Platonic Idealism. In other words, while object orientation may have some value inside computer systems, I don't think that humans exist in the world in an O-O fashion.

When I stand in the forest down the street from my house and look at the trees, I don't see them as variations from some ideal Tree. There is no such thing in the world or in my head. Instead, I see lots of discrete entities that share characteristics.

Here's another way to put it:

I ingest data through my senses (see the different kinds of bark on the trees, hear the winds rustling through the leaves, taste the air blowing off the river), and my brain identifies patterns. It then uses those patterns to develop strategies for surviving, reproducing and thriving. 

So I can look at an invasive buckthorn (which I cut down by the hundreds each week) and a big, old oak tree and think to myself: "They are both trees." Which really means that they can be grouped together by common attributes.

They are, in short, a set.
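
In SQL terms, that grouping is quite literal. Here is a minimal sketch (the trees table and its columns are hypothetical, just for illustration):

    -- Hypothetical table: each row is one discrete, imperfect tree
    -- observed in the forest. There is no ideal Tree anywhere in the schema.
    CREATE TABLE trees (
      id       NUMBER PRIMARY KEY,
      species  VARCHAR2(50),   -- 'buckthorn', 'oak', ...
      height_m NUMBER
    );

    -- The table itself is the set: every row shares the attributes
    -- that make it count as "a tree."
    SELECT COUNT(*) AS tree_count
      FROM trees;

    -- And subsets are carved out just as declaratively.
    SELECT COUNT(*) AS invasive_count
      FROM trees
     WHERE species = 'buckthorn';

No procedural traversal, no class hierarchy: just declarative statements over a set of rows.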

I believe that humans naturally think about the world in terms of sets of things, as a basic, evolved mental strategy for getting by.

And if that is true, it is very easy to see why SQL was such a remarkable breakthrough back in the 70s (kudos to Codd, Date, IBM research labs, and Ellison). And why it played (and plays) such a critical role in the Information, Internet, Mobile and Internet of Things Eras.

SQL syncs up so well with how our brains naturally operate that it is hard to imagine another data language that is different enough to supplant it. A much more likely scenario is that the SQL language will be extended to meet new requirements (as will the underlying SQL engines, such as Oracle Database, MySQL and so on).
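
We have already watched this happen. As one small sketch of that evolution (the orders table and its doc column are hypothetical), Oracle Database 12c extended SQL with JSON_TABLE, so the same declarative language now reaches inside JSON documents:

    -- SQL stretched to a new requirement: projecting JSON content
    -- as rows and columns (JSON_TABLE, introduced in Oracle Database 12c).
    SELECT jt.item_name, jt.item_price
      FROM orders o,
           JSON_TABLE(o.doc, '$.items[*]'
             COLUMNS (
               item_name  VARCHAR2(50) PATH '$.name',
               item_price NUMBER       PATH '$.price'
             )) jt;

The shape of the query - describe what you want, let the engine figure out how - is unchanged; only the reach of the language grew.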

I'm not really arguing that SQL is "forever." I expect that at some point the whole computing paradigm will shift in ways we can't even imagine today, and SQL will then somehow become irrelevant.

But as long as we write code the way we do today, still build apps the way we do, and still need databases to hold our data, we'll find ourselves relying on an ever-more-powerful SQL language to manipulate that data.

SQL is Dead (NOT), Long Live SQL!

Comments

  1. Hello Steven,

    This is a very interesting take on why SQL feels so much more natural than OOP to most of us.

    Or at least to the old ones among us (like me!), who started a long, long time ago, before SQL was even born.

    After many years of "classic programming", with data stored in O/S files,
    learning SQL was like a breath of fresh air, because it allowed you "to jump"
    anywhere "into" that "data collection" and retrieve exactly what you wanted, with a simple, straightforward statement, without writing lots of code to describe the file and then read it, usually sequentially, to get where you wanted.

    For some reason, our brains were trained to think in a rows-and-columns pattern, whether the data is stored in a file or in a relational table, so what could be more natural than SQL?

    Well... features like the MODEL clause, MATCH_RECOGNIZE, JSON, and all the tricky XML stuff were not there yet...

    But your post immediately reminded me of a discussion I had a long time ago with one of my colleagues, who had developed for many years with a hierarchical database and a programming language specific to it - as far as I remember, a kind of "ancestor" of Magic, the product that later replaced it.

    That colleague strongly insisted that our real world is entirely hierarchical,
    and that the relational paradigm in programming was something completely artificial... and I found it very hard to understand his point
    (well... aside from the "human hierarchies" that we all equally hate...).

    And now to the (still modern, at least for me) OOP... which looks to me like a nice idea, but I still find it hard to comprehend exactly how we can process masses of data with this approach... where do we put that data before and after processing? How exactly do we physically store those "objects"?
    And so on...

    But yes, your encouraging professors to teach and students to learn SQL matches a feeling I have very often: had SQL existed in my first years of learning, even in elementary school, I would probably have enjoyed its elegance a lot.

    Maybe the natural conclusion is that, for every person, whatever he or she was taught and trained to do, including thinking patterns, seems the most natural thing... and all the rest is perceived as artificially created tricks.

    What is sure is that we live at a very specific point in time, and things will probably change... and, for sure, they will change more, and at a faster pace, than we will be able to comprehend.

    The philosophical question remains whether all those changes are indeed required for real-life reasons, or whether many of them are introduced just because everybody wants "to pull the rug" from under the feet of their predecessors/competitors and take away their market... after all, everybody has to make a living...


    Thanks a lot & Best Regards,
    Iudith

  2. I totally agree! When everyone thought the RDBMS and SQL were dead, they ran to alternative technologies: there were NoSQL and Hadoop. What is funny is that, after a couple of years of trying to use those and their native languages, many people and vendors started trying to bring SQL back and implement it on top of those technologies. SQL is one of the most important languages anywhere, because you describe what you want to see and leave it up to the underlying system to do the work of joining it all. SQL is definitely not dead; if anything, it is as alive as it has ever been, especially now that people realize how important and easy it is to use compared to the alternatives. Sometimes you don't know how good you have it until something you really depend on is taken away. Long Live SQL!!!!


