In order to trust the results from the new Senate data-capture system, you have to trust that the AEC has implemented the count correctly, without making mistakes and without anyone tampering with the data. Both are probably unlikely events, but they are possible.
So what is different for the 2016 election that did not happen in 2013? The legislation for Senate voting changed (remember they stayed up all night at Parliament House to get the new legislation through and said nasty things to each other?). The outcome, however, was good: they got rid of the group ticket system, which caused many problems — too many for me to deal with in this article.
[Mayne: govt powers ahead with Senate voting reform]
The downside is that the AEC now has to capture every preference mark on every Senate ballot paper individually (previously it did this for only 3% of ballots), which is no mean feat. The AEC's solution was to partner with Fuji Xerox and implement a system that scans and enters the data for every ballot paper. A full description of the system can be found on the AEC's website.
The new Senate data-capture system is very impressive and will probably capture preference markings on ballots with a high level of accuracy. How high, you ask? This would not be hard to find out: all you have to do is take a sample of the input (ballot papers held in storage in well-labelled boxes) and compare it to the output (data in a file on the AEC's virtual tally room website). The comparison is simple but tedious. It involves two people cross-calling the data from each sampled ballot against the file; any time there is a difference, the Divisional Returning Officer would adjudicate. Ideally, this process would be done in the presence of scrutineers.
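To make the idea concrete, here is a minimal sketch of such a sample cross-check. The data structures and ballot IDs are hypothetical, purely for illustration; the real comparison would work from physical ballot papers and the AEC's published preference file, whose formats differ from this toy model.

```python
import random

def sample_audit(ballots, published, sample_size, seed=42):
    """Cross-check a random sample of paper ballots against the
    published preference file. `ballots` maps a (hypothetical)
    ballot ID to the preference sequence read from the paper;
    `published` maps the same IDs to the sequence in the output file.
    Returns the IDs that disagree and so need adjudication."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    ids = rng.sample(sorted(ballots), sample_size)
    return [b for b in ids if ballots[b] != published.get(b)]

# Toy example with one deliberate transcription error in "b2"
paper = {"b1": [1, 2, 3], "b2": [2, 1, 3], "b3": [3, 2, 1]}
scanned = {"b1": [1, 2, 3], "b2": [2, 3, 1], "b3": [3, 2, 1]}
print(sample_audit(paper, scanned, 3))  # ['b2'] flagged for adjudication
```

In a real audit the sample size would be chosen statistically, but even a modest sample would put a measurable bound on the scanner's error rate.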
You would think that given this is such a simple test the AEC would have done this. But no — they will not do this check because it is not required by the legislation.
Oh well, maybe we can rely on the other testing done by the AEC and their detailed operating procedures. The AEC identified on their website that the following testing was done:
- Independent third-party quality assurance test by IBM;
- Certification by the National Association of Testing Authorities for the count program; and
- Penetration testing of the systems and networks by an independent, accredited Information Security Registered Assessors Program (IRAP) auditor.
So surely all we need to do now is ask, and they will provide these reports for our review, right? Again the answer was no, according to Doug Orr, the NSW state manager and returning officer for the NSW Senate count. He advised a scrutineer who requested the completion reports for the above testing:
“The purpose of scrutiny, role of scrutineers and the process to be followed is set out in section 265 and 273 of the Commonwealth Electoral Act (CEA) and the AEC produces a handbook to assist scrutineers which may be found at https://www.aec.gov.au/Elections/candidates/scrutineers.htm. Matters which fall outside of the relevant section of the CEA, such as your request for review of documents, are not considered part of scrutiny and are therefore not available.”
So is this a reasonable approach to take and does it achieve the high standards the AEC has set itself in its current “2016 Federal Election Service Plan”? The plan has the following in its service standards:
“Standard 4. The public and stakeholders have confidence that the electoral process is well managed …
“The AEC is committed to delivering processes that uphold electoral integrity and engender voter and stakeholder trust in the result and to ensuring the security and sanctity of the ballot paper at all times.” (Italics mine.)
The question is: does the current scrutiny-related legislation prevent the AEC upholding its own service plan standards when complex computer systems are being used? Wouldn’t voter and stakeholder trust be improved by the AEC providing test reports and performing a simple cross-check of the input ballots to the output data file? If the answer is “yes” then legislation needs to change.
The following two articles, from UNSW and Melbourne University, both argue that we need changes to the current processes. The scrutiny of election technology is quite different from the scrutiny of the current paper-based election processes.
Firstly, it is self-evident that the effective scrutiny of electronic election processes requires some knowledge of the underlying technology.
Secondly, scrutiny of these systems cannot be achieved by simply viewing ballot papers being entered or reports being printed because that does not prove the output is right.
Thirdly, it is not possible to fully test any complex computer system, so the scrutiny process must include sufficient reconciliation of outputs to known inputs to ensure the process has the integrity and accuracy needed to elect the correct candidates.
One solution is the use of specialist boards to deal with election technology scrutiny. This approach has been implemented in other jurisdictions, such as Norway, where an Internet Election Committee (IEC) oversaw the internet voting trials at the 2013 election, with a particular focus on technology security. More information about the committee's work can be found in The Carter Centre's report. It should also be noted that Canada has a completely independent body to oversee elections; aspects of this entity's structure and function may also be applicable in the Australian environment to address the increased complexity of electoral processes.
Most technologists who have reviewed the Senate scan process recognise the current scrutiny practices are deficient when using technology to handle votes. I would therefore recommend the government consider changes to the Commonwealth Electoral Act regarding scrutiny to require observation of computer-based vote-handling systems to be undertaken by independent experts in conjunction with the current scrutiny processes. Anyone interested in supporting these changes should consider making a submission to the federal electoral matters committee when it sits after Parliament resumes.
One approach is to allow open, or at least impartial, inspection of the code implementing the count (open source). Since there are three computerised data-handling stages, however, this would be only a partial improvement on the current black-box result. A better way would be to implement an Open Data approach, where enough data is published at each stage to allow any interested party to write their own implementation and check the results.
Stage 1 Data Entry: Physical bundles of votes become files of preference allocations. This currently happens under scrutiny and is checkable.
Stage 2 Data Aggregation: The files (bundles) of votes get sorted into an overall occurrence count for each preference pattern. If the files of votes AND the occurrence counts are published, this stage can be checked too.
Stage 3 The Count: From the occurrence-count data, independent implementations of the count can be run. Each stage of the count is currently published, so independent tallies can be compared against it, exposing any bugs in the result. This is the best way to ensure renewed confidence in the result.
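The Stage 2 check above can be sketched in a few lines. The bundle file format here is a stand-in I have invented for illustration; the real published files would have their own format, but the principle is the same: anyone can refold the bundle files into occurrence counts and compare them to the AEC's published figures.

```python
from collections import Counter

def aggregate(bundles):
    """Stage 2 check: fold per-bundle preference files into an
    overall occurrence count for each preference pattern. Each
    bundle is a list of ballots; each ballot is a preference
    sequence. (This structure is illustrative only.)"""
    tally = Counter()
    for bundle in bundles:
        for ballot in bundle:
            tally[tuple(ballot)] += 1
    return tally

# Two toy bundles, as if from two counting centres
bundles = [
    [[1, 2, 3], [2, 1, 3]],
    [[1, 2, 3], [3, 2, 1], [1, 2, 3]],
]
counts = aggregate(bundles)

# Hypothetical published occurrence counts to reconcile against
published = {(1, 2, 3): 3, (2, 1, 3): 1, (3, 2, 1): 1}
assert counts == Counter(published)  # any discrepancy exposes an error
```

With the occurrence counts verified, Stage 3 is simply a matter of feeding them into an independently written implementation of the Senate counting rules and comparing each round of the count against the AEC's published distribution.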