Separate speaker notes to accompany the random update presentation (randomupdtproc):

 

Slide #1:

This slide presentation deals with the logic of random updates.

Slide #2:

Maintenance updating is used to add, change, and delete records on a file; it is updating in the traditional sense.

Production updating is when the file changes because of the daily processing that occurs. Production updates usually have just change transactions.  For example, a production update would deal with receipts into inventory and sales from inventory, whereas a maintenance update would be concerned with adding new items to inventory, changing the price of an item, or deleting an item from inventory.

Sequential updating is useful when there is a large percentage of hits between the two files.  If you have additions, changes, or deletions for the majority of records on the file, it makes sense to process sequentially.  If you have a small percentage of hits, that is, you will be adding, changing, or deleting only a small number of records, then a random update is more effective.

Random updating is transaction driven, which means that when the transactions have all been processed, the program ends. This is because a master record is randomly retrieved for processing only because a transaction calls for it.  Contrast this with sequential processing, which continues until both the transaction file and the old master file have reached end of file.

Slide #3:

The random update needs to have the data edited, but the data does not have to be sorted, since the program can handle the transactions in whatever order they come in.  If the transactions come in from a screen instead of from a disk file, the program will have to include the editing components itself.

Slide #4:

Notice that the Master is an I-O file, which means changes are being made directly to the master.  This requires that a backup of the master be done prior to running the random update.  The backup procedure is frequently executed right after the master has been updated, right before the update program is to be run, or after any checking that is done following the update of the master.
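
As a rough sketch of my own (not taken from the slides), setting up the master as an indexed I-O file in COBOL might look something like this; all of the file and field names (TRANS-FILE, MASTER-FILE, MASTER-ID, TRANS-CODE and so on) are just assumed for the example and are reused in the later sketches:

    ENVIRONMENT DIVISION.
    INPUT-OUTPUT SECTION.
    FILE-CONTROL.
        SELECT TRANS-FILE  ASSIGN TO "TRANS.DAT".    *> sequential transaction file
        SELECT MASTER-FILE ASSIGN TO "MASTER.DAT"    *> indexed master, accessed randomly
            ORGANIZATION IS INDEXED
            ACCESS MODE IS RANDOM
            RECORD KEY IS MASTER-ID
            FILE STATUS IS MASTER-STATUS.

    DATA DIVISION.
    FILE SECTION.
    FD  TRANS-FILE.
    01  TRANS-REC.
        05  TRANS-CODE   PIC X.           *> A = add, C = change, D = delete
        05  TRANS-ID     PIC X(5).        *> key of the master record the transaction is for
        05  TRANS-DATA   PIC X(30).
    FD  MASTER-FILE.
    01  MASTER-REC.
        05  MASTER-ID    PIC X(5).
        05  MASTER-DATA  PIC X(30).

    WORKING-STORAGE SECTION.
    01  MASTER-STATUS    PIC XX.
    01  EOF-FLAG         PIC X VALUE "N".
    01  MASTER-FOUND     PIC X VALUE "N".

    PROCEDURE DIVISION.
        OPEN INPUT TRANS-FILE             *> transactions are read sequentially
        OPEN I-O   MASTER-FILE.           *> master is updated in place (read, rewrite, write, delete)
        *> main processing loop follows (sketched under Slide #6)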

Slide #5:

The logic of a random update involves attempting to match the record on the transaction with a record on the indexed file.  Frequently this involves doing a random read to find out whether the record exists.  Based on the result, the appropriate processing is done.
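
A sketch of that read-to-check approach as a paragraph, using the names assumed in the earlier sketch (TRANS-ID, MASTER-ID, MASTER-FOUND):

    READ-MASTER.
        MOVE TRANS-ID TO MASTER-ID              *> establish the key we are looking for
        READ MASTER-FILE
            INVALID KEY                         *> no master record with that key exists
                MOVE "N" TO MASTER-FOUND
            NOT INVALID KEY                     *> the record was found and is now in MASTER-REC
                MOVE "Y" TO MASTER-FOUND
        END-READ.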

Other techniques can also be used that simply attempt the operation and react to the INVALID KEY or NOT INVALID KEY responses.

Slide #6:

Note that the reads of both the master file and the transaction file are handled in procedures or subroutines.  The reason is that I/O statements (reading and writing) generate a lot of machine code, so coding each of them just once is more efficient. In addition, keeping the checking for success in its own procedure shows it more clearly.

Transactions are being read sequentially.  After each read, we check to see if EOF has been reached, which means the end of processing.

The master is being read randomly because of a transaction.  When we attempt to read the master, we need a check to find out whether the read was successful, that is, whether a master record exists that matches the id/key the transaction called for.
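
A sketch of how the transaction-driven main loop and the transaction read procedure might be organized, again using the assumed names from the earlier sketches; the random read of the master is the READ-MASTER paragraph sketched under Slide #5, and PROCESS-TRANSACTION is sketched under Slide #7:

        PERFORM READ-TRANSACTION                *> priming read
        PERFORM UNTIL EOF-FLAG = "Y"            *> transaction driven: stop when the transactions run out
            PERFORM PROCESS-TRANSACTION
            PERFORM READ-TRANSACTION
        END-PERFORM
        CLOSE TRANS-FILE MASTER-FILE
        STOP RUN.

    READ-TRANSACTION.
        READ TRANS-FILE
            AT END MOVE "Y" TO EOF-FLAG         *> end of transactions means end of processing
        END-READ.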

Again, this processing is transaction driven. 

Slide #7:

"Establish the key" indicates that the programmer must do whatever the language requires to allow the key to be used to randomly access the file.

Remember, success means that a master was successfully read.

On this slide I first check to see if there is a master.  If there is, a change and a delete are valid.  An add is an error since the record already exists.  If there is not a master, the add is valid but the change and delete are errors because I cannot change or delete a record that is not there.
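
A sketch of that decision logic as a PROCESS-TRANSACTION paragraph, assuming the TRANS-CODE values A, C, and D and the MASTER-FOUND flag from the earlier sketches (the error and processing paragraphs named here are placeholders):

    PROCESS-TRANSACTION.
        PERFORM READ-MASTER                                *> random read; sets MASTER-FOUND
        IF MASTER-FOUND = "Y"
            EVALUATE TRANS-CODE
                WHEN "A" PERFORM REPORT-DUPLICATE-ERROR    *> cannot add: record already exists
                WHEN "C" PERFORM APPLY-CHANGE
                WHEN "D" PERFORM DELETE-MASTER
            END-EVALUATE
        ELSE
            EVALUATE TRANS-CODE
                WHEN "A" PERFORM ADD-MASTER
                WHEN "C" PERFORM REPORT-MISSING-ERROR      *> cannot change a record that is not there
                WHEN "D" PERFORM REPORT-MISSING-ERROR      *> cannot delete a record that is not there
            END-EVALUATE
        END-IF.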

Slide #8:

Note that the two master files are the same file.  The red file on the left is the master prior to running the update.  The brown master shows the changes as the update is being made - the results of the update.  The after version is the file that is kept.

 Note that the transactions do not have to be in order.  We will see this on future slides.

Slide #9:

Note that alternate logic for the ADD is to attempt the WRITE and let the invalid key clause pick up an attempt to write a duplicate record.  If the invalid key clause is not triggered, the ADD will be successful.
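
A sketch of that alternate ADD logic, where the WRITE itself detects the duplicate (TRANS-DATA, MASTER-DATA and the error paragraph are the assumed names from the earlier sketches):

    ADD-MASTER.
        MOVE TRANS-ID   TO MASTER-ID            *> build the new master record from the transaction
        MOVE TRANS-DATA TO MASTER-DATA
        WRITE MASTER-REC
            INVALID KEY                         *> a record with this key already exists
                PERFORM REPORT-DUPLICATE-ERROR
        END-WRITE.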

Note that the master files (before and after) start out looking the same.  Changes will be made to existing records, and records will be added and deleted.  Records on the master with no activity will just stay there.

Slide #10:

The changes on the transaction are made to the master record in memory, and then the record is rewritten to the master file, thus making the changes.
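
A sketch of the change logic, assuming the master record has already been read successfully into MASTER-REC (the fields moved here are just illustrative):

    APPLY-CHANGE.
        MOVE TRANS-DATA TO MASTER-DATA          *> apply the transaction's changes to the record in memory
        REWRITE MASTER-REC                      *> put the updated record back in place on the master file
            INVALID KEY
                PERFORM REPORT-REWRITE-ERROR    *> should not happen if the read just succeeded
        END-REWRITE.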

Slide #11:

The second change transaction for master record 222 is read and processed.  Notice that there is no acknowledgement that there has already been a change.  We do not require that the transactions be sorted, so each one is dealt with individually.

Slide #12:

The delete removes the record from the Master file.  Note that different languages and systems handle physical deletion differently.  The main thing is that the record is no longer available.
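
A sketch of the delete logic, assuming the matching master record has just been read and the key is still in MASTER-ID (the error paragraph is a placeholder):

    DELETE-MASTER.
        DELETE MASTER-FILE RECORD               *> removes the record whose key is in MASTER-ID
            INVALID KEY
                PERFORM REPORT-DELETE-ERROR     *> should not happen if the read just found the record
        END-DELETE.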

Slide #13:

In this example we are attempting to change record 350.  Since that record does not exist on the master, no change can be made.  This is an error.

Slide #14:

Since the add transaction has a match on the master, the record cannot be added because it would cause a duplicate identification number. 

Again, this could have been handled by attempting the WRITE and letting the invalid key catch the duplicate, instead of checking for the match by reading the master prior to attempting the write.

Slide #15:

In this case, we are attempting to delete a record that does not exist.  Obviously this cannot be done.

An alternative would be to attempt the DELETE without checking and let the invalid key clause catch the fact that no record exists to delete.  The problem with this is that if you do find the match and do the delete, you are doing a blind delete.  Most logic calls for some kind of viewing of the record prior to deletion for confirmation. 

Slide #16:

In this example, the transactions are not sorted.  They are the same transactions as were used in the previous example.  If the transactions are sorted, then all of the transactions for one id are grouped together.  This might have an advantage in some circumstances.  With our processing it does not matter.

Slide #17:

This continues the example using unsorted transactions.  The results will be the same, but the order of transactions on the error/trail report will be in the order that the transactions were processed rather than in sequence.  This is one argument for sorting the transactions.

Slide #18:

I have hopefully shown that the transactions do not have to be in order.  I am only going to process the first three transactions in this way.