“Partitioning addresses key issues in supporting very large tables and indexes by letting you decompose them into smaller and more manageable pieces called partitions. SQL queries and DML statements do not need to be modified in order to access partitioned tables. However, after partitions are defined, DDL statements can access and manipulate individual partitions rather than entire tables or indexes. This is how partitioning can simplify the manageability of large database objects.”
An Oracle Database concept relating to VLDBs (Very Large Databases).
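The quote above is about Oracle specifically, but the underlying idea is general. Here's a toy sketch in Python (nothing to do with Oracle's internals, and all names are my own) of why partitioning helps: if rows are split into per-range partitions, a lookup for one key only has to scan one small piece rather than the whole table. This is the "partition pruning" effect in miniature:

```python
from collections import defaultdict

PARTITION_WIDTH = 1000  # hypothetical number of rows per range partition

def partition_key(row_id):
    # Map a row id onto the partition whose range contains it
    return row_id // PARTITION_WIDTH

# Load a "large" table of 10,000 rows into range partitions
partitions = defaultdict(list)
for row_id in range(10_000):
    partitions[partition_key(row_id)].append({"id": row_id})

def find(row_id):
    # Prune: only the one partition that can hold the row is scanned,
    # i.e. 1,000 candidate rows instead of 10,000
    candidates = partitions[partition_key(row_id)]
    return next(r for r in candidates if r["id"] == row_id)

row = find(4242)
```

The application-facing query (`find`) doesn't change whether the data is partitioned or not, which mirrors the point in the quote about SQL and DML needing no modification.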
Every OSS that I’m aware of is back-ended by some form of database for storing the vast amounts of information they collect. There are all sorts of database efficiency mechanisms used by DBAs (Database Administrators), but you sometimes also need to take a closer look at the application design and ways that you’re modelling the network.
Years ago, I was modelling an ATM switch to allow customer services to be created across it. Each physical port supported a VPI range of 0–255 and each VPI supported a VCI range of 1–65535. I modelled all the possible logical port variations so that any VPI/VCI combination could be assigned to any given customer. Without thinking, I then bulk-loaded all of these logical ports across the customer’s ATM network. I’m sure you’ll be clever enough to see why my data load was still going when I came back to work the next day. I was effectively loading nearly 17 million (256 × 65,535) new logical ports for every single physical port in the ATM network. Doh! Do you think the ports table in the database was a little bit big after that blunder??? 🙂 Doing a SELECT or even a simple implicit JOIN on this table would’ve brought things to a standstill for a while.
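The back-of-the-envelope arithmetic is worth spelling out, because the numbers escalate quickly once you multiply through by the number of physical ports (the 32-port figure below is just an illustrative assumption, not from the actual network):

```python
# Combinatorial explosion behind the data-load blunder
VPIS_PER_PORT = 256     # VPI range 0-255
VCIS_PER_VPI = 65535    # VCI range 1-65535

logical_ports_per_physical_port = VPIS_PER_PORT * VCIS_PER_VPI
print(logical_ports_per_physical_port)   # 16776960 -- nearly 17 million

# Even a modest switch with, say, 32 physical ports balloons to over
# half a billion rows if every combination is pre-built
print(32 * logical_ports_per_physical_port)   # 536862720
```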
Luckily it was only a data migration environment, and I was able to trash the ports table without causing any harm.
Rather than modelling every VPI/VCI variation, we didn’t pre-build any. Instead, we simply changed the service order activities to create the required VPI/VCI for each new customer before allocating their service.
Ever since, I’ve been careful to trade off completeness of the model against keeping data to a minimum, helping efficiency from both a database and an application usability perspective.