Written in 365 Parts: 152: Mathematical Conundrum

Hooper smiled for the first time in days. Combining the data points into one massive database for cross-comparative searches had been successful. They had it. Someone was manipulating data, and Hooper now knew the manner in which they were doing it, and could use that. A pattern repeated across some widely different sensor readings, which could only mean one thing: someone had altered them all in the same fashion. To be more precise, the data had been manipulated by the same algorithm, likely a very complex one, and it had left artefacts that indicated the manipulation.

The computers had been crunching streams of information for many hours. The first sign of a pattern was in air-flow data. The amount of air used in any section of the station was strictly monitored and regulated. The filters not only detected the levels of gases, but also scrubbed and filtered the air for particulates and airborne toxins, viruses and bacteria. The filters were analysed on a regular basis and their findings recorded.

They could record the amount of air used by any individual, or group of individuals, and compare it to their movements around the station. A good medical programme would detect an organic's health patterns throughout the day and even detect and predict the spread of infections or diseases. Judiciary didn't pay for that level of pre-emptive medical care. They did have the sensors, though, and they recorded the information. Storage was cheap, medical intervention was not.

Due to various factors that might affect the readings, they were always adjusted to reflect an average distribution of information, and then a program allowed generous wiggle room for non-recorded trends. A room where people told jokes all afternoon would look vastly different to the same people quietly working the following day. So there was a range of variation. You could hide a lot of data in that variation if you knew how to adjust the figures.

The same was true for the sensors that recorded mass. Every plate on the floor of the building had a weight sensor, as did every lift, every gravitational belt, almost any surface that was in use in some manner. All energy usage and variations were observed and noted. Again there could be slight differences; the machines weighed items to the tenth of a gramme, and a program ran corrective statistics based on trend. Eating a sandwich while walking, then throwing the wrapper in a waste bin, would be immediately apparent as a shift in mass between being in a room and stepping into a lift. So a range of variation was allowed. If you were clever, and could manipulate this data, you could smuggle or move items, even replace one person with another.

More importantly, you could do the same with the load lifters, elevators and fuel readings on the shuttles. It was a matter of knowing what the sensors recorded, what they matched it against, and what the trend variances would allow. Hooper now knew all of this from the last many hours of research. It was clear that so did the quarry.

What the computers found by analysing every record and cross-matching them was a statistical variance in the numbers that sat within the trend but had a pattern. It was all about the random. You could generate a truly random variation. It would be difficult and require a very powerful computer, but you could do it. But the random numbers being selected here had to fall within a trend. They also had to fall within a close enough variation that they were not too distant from what was expected. The trend could not always sit at an extreme point or swing wildly. Over time, with the large number of data points collected, this led to repetition. The algorithm's constraints of range led to a cycle through the possible variations. So the trend distribution differences in fuel cell depletion of a shuttle journey twenty-seven days previously, which coincided with a visit from Susa Camile, matched the trend variation of airborne particulate collection from coffee in a storage room ninety-four days ago. The statistical likelihood of one such match was small, but not insignificant. However, the same variance occurred over a hundred times in the last year across a vast array of different readings and data sets. That was statistically unlikely. The intellect comparing the data sets placed the chances outside the possibility of it occurring naturally.

This information meant that something was similar. Why did the visit from Camile link, in the statistical variation of its numbers, the spread of its variables, to a coffee particulate count in a storage room? Hooper had the computers analyse both sets and looked at the actual records. The coffee particulate count was higher in the storage room than expected, yet there was only one officer listed as being on duty, and they drank chocolate. Hooper verified that the air filters had recorded chocolate particulates; they had, and they fell well within statistical averages.

The shuttle journey was more interesting. The trend there was negative. The weight on the return journey, accounting for all items taken from the shuttle and items gained, indicated that the shuttle had a lower mass-to-power-usage ratio. The statistical variance had the same profile as the coffee particulates. It was the same sequence of possible changes.

If one were suspicious, reasoned Hooper, one might imagine that Camile brought an item with her and left it at the station. The mass difference, which would have shown in the sensor data, was overwritten at the highest level of variance to mask the fuel cell inaccuracies on the return journey. So something was brought to the station, removed and hidden from the manifests, its missing weight accounted for by changing the sensor data. Hooper started a very thorough check on the data of the inbound journey. It was also likely to be masked, just with a set of variances that hadn't yet been detected.

The coffee particulates, then, what were they? Since the readings said there was only one chocolate-drinking person on duty and using the terminal at the time, it would suggest that this wasn't true. Again Hooper started a thorough check of all data around the event. Say someone could mask their arrival at that location, or at least mask how long they stayed. Say they drank coffee while the organic on duty drank something else. Say they brought the drinks with them, or stayed for a drink while there.

It was two data points. But many sets of data were collected at those two points. Hooper had to assume they had all been masked, altered and changed using the very complex deception the evidence implied. Hooper pondered that if the information was suspect, and there was a pattern, then those many streams of data could now be used. There were very few suspect data collections compared to the vast numbers they had gathered. Hooper decided that the best notion was to run an analysis on the restricted sets, the suspect data, and the whole data set. Use the known problem to look for other patterns and try to retroactively determine whether the algorithm was applied to the data all the time or just at certain points. How was it being used?

Hooper opened a very secure link and set the expert, who did not work for Judiciary, to work on reverse engineering a mathematical conundrum.
