NISM Unit 1

5 minute read

Published:

Unit Reading

The majority of the insights I gained came from the book “Practical Information Security Management”, written by Tony Campbell.

One interesting thing I learned from the unit reading is that, in the workplace, individuals may use anonymization services (such as Tor) to exfiltrate sensitive data. After some thought, I wonder if a VPN could be used to achieve the same outcome.

Another interesting idea I found during the unit reading is that security has become significantly more important because of how dramatically the barrier to entry has been lowered. As the author points out, it’s now possible to rent malicious software and run it with no technical knowledge required, and for more complex scenarios, professional hacking services can be hired, meaning that all that is needed to initiate an attack is motivation and money. The “malicious software as a service” business model has improved conditions for malware developers, as they now have steady income streams, incentivising them to improve on what they have and expand their offerings.

Based on the above, I argue that there is an asymmetry to security: a single mistake is costly to a company, but a failed attack costs an attacker very little. To illustrate this point, only one person needs to open a malicious attachment to install ransomware and let it spread through their company’s network. Companies must invest time and money in better email filters, training, and so on, whereas an attacker only needs to acquire ransomware through some channel and keep sending spam emails that carry it. Following this, I would say that implementing security in an organisation needs to be an active discipline, because attacks become easier to initiate over time, especially in today’s world.

To make security more active, behaviours and processes need to be encouraged, not just mindfulness. One idea I have in mind is that organisations should regularly send emails that look identical to malicious ones but contain no harmful payload. If an employee opens the attachment, they should be notified that the email was dangerous in nature, and the warning signs should be explained to them. This would be better than simply circulating a list of tips for checking emails, because tips do not make employees engage with the content, so they do not learn from it; a minimal sketch of how such a campaign could be tracked is given below. I would like to research what current practice is in industry: do any companies do this?
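As a rough sketch of the simulated-phishing idea, the Python snippet below stands up a small internal web server that logs whoever opens the “attachment” link in a training email and shows them an explanation instead of a payload. Everything here is hypothetical and assumed for illustration: the host, port, `id` query parameter, log file name, and training message are not taken from the unit reading.

```python
# Minimal sketch of the simulated-phishing campaign described above.
# Assumption: each training email carries a unique link such as
# http://<internal-host>:8080/attachment?id=<employee-id>. When an employee
# clicks it, this server logs the event and shows a short training message
# instead of a payload. All names and messages here are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from datetime import datetime

TRAINING_MESSAGE = (
    b"This was a simulated phishing email sent by the security team.\n"
    b"Warning signs: unexpected attachment, mismatched sender domain,\n"
    b"and urgent wording. Please report similar emails in future.\n"
)

class SimulationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        employee_id = query.get("id", ["unknown"])[0]
        # Record who opened the simulated attachment and when.
        with open("phishing_sim_log.csv", "a") as log:
            log.write(f"{datetime.now().isoformat()},{employee_id}\n")
        # Respond with the training explanation rather than a payload.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(TRAINING_MESSAGE)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SimulationHandler).serve_forever()
```

The value of logging, rather than only warning, is that the security team can see which departments open simulated attachments most often and target follow-up training accordingly.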

Forum Discussion

I liked the forum discussion because it gave me the opportunity to see how much attack surfaces can differ between computing fields. In the previous module, I worked on developing a secure frontend website as part of a group project, and in my career I have mainly focused on software security in limited contexts (such as authentication). The article given for reading challenged my thinking because it showed how different fields in computing face different limitations due to the technologies they use, which in turn influence how security must be enforced. I was initially surprised by how fast the brute-force attack was, because it is not something I would expect in an age where high-entropy passwords are the norm, given that most websites enforce such requirements. The rough calculation below illustrates how quickly a weak password space can be exhausted.
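To make the point about entropy and brute-force speed concrete, the sketch below estimates how long it takes to exhaust a few password spaces. The guess rates are my own illustrative assumptions (a rate-limited device versus offline cracking of a leaked hash), not figures from the article.

```python
# Rough illustration of why password entropy matters for brute-force speed.
# The guess rates below are assumptions for illustration only: e.g. a
# rate-limited device might accept ~100 guesses/second, while offline
# cracking of a leaked hash might manage ~10^10 guesses/second.
import math

def time_to_exhaust(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Seconds needed to try every password of the given length and alphabet."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second

scenarios = [
    ("4-digit PIN, online guessing", 10, 4, 1e2),
    ("8-char lowercase, online guessing", 26, 8, 1e2),
    ("8-char mixed-case + digits, offline cracking", 62, 8, 1e10),
    ("12-char mixed-case + digits, offline cracking", 62, 12, 1e10),
]

for name, alphabet, length, rate in scenarios:
    entropy_bits = length * math.log2(alphabet)
    seconds = time_to_exhaust(alphabet, length, rate)
    print(f"{name}: {entropy_bits:.0f} bits of entropy, "
          f"~{seconds / 86400:.2g} days to exhaust")
```

Even this crude model shows why a short, low-entropy credential on a device with no rate limiting falls in a very reasonable amount of time.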

However, once one considers the constraints on medical devices, it makes sense that such situations can occur. Medical devices are used by people of varying technical skill, so security needs to be balanced with ease of use. Complex password rules may improve security, but they could make the device more frustrating to use (e.g., alphanumeric passwords may need to be retyped multiple times because certain characters look similar to each other). Likewise, security could be improved in a more automated fashion through techniques such as encryption; however, encryption algorithms increase power consumption and latency, potentially impacting the accuracy of the data the device reports.

This was an interesting topic to consider, and what I learned from this reflection is that it’s important to first understand the factors that limit how effectively security can be implemented in a field before making a recommendation. In my forum post, I briefly mentioned how energy-efficient encryption algorithms can still be regarded as less secure than established algorithms such as AES, with more research required before they are industry-ready. My suggestion of sticking to AES at the cost of power consumption was built on this line of reasoning: knowing that promising results are on their way means that immediate solutions should remain compatible with future upgrades and augmentations. From this perspective, software-based encryption is a better option, as it leaves the option of adopting a low-power, low-latency, hardware-based encryption scheme open once that technology is ready for industry. A brief sketch of measuring the latency of software-based AES is included below.
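As a minimal sketch of the software-based AES approach, the snippet below encrypts a small record with AES-GCM and measures the average latency, using the third-party `cryptography` package. The payload, key size, and iteration count are illustrative assumptions rather than parameters from the forum article; the point is simply that software AES can be benchmarked on the target hardware before committing to it.

```python
# Minimal sketch: measure software AES-GCM latency for a small sensor record.
# Requires the third-party "cryptography" package (pip install cryptography).
# The payload and loop count are illustrative assumptions.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # 128-bit key keeps the cost modest
aesgcm = AESGCM(key)
reading = b'{"heart_rate": 72, "timestamp": 1700000000}'  # hypothetical record

start = time.perf_counter()
for _ in range(1000):
    nonce = os.urandom(12)                  # AES-GCM needs a unique nonce per message
    ciphertext = aesgcm.encrypt(nonce, reading, None)
elapsed = time.perf_counter() - start

print(f"Average encryption latency: {elapsed / 1000 * 1e6:.1f} microseconds per reading")
```

A measurement like this on the actual device hardware would show whether the latency and (indirectly) power cost of software AES is acceptable, or whether a hardware scheme is needed sooner.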

Based on the above, considering future trends may be useful as part of a decision-making strategy. In cases where some future technology would solve a current problem perfectly, it would make sense to build in compatibility for that technology now, to save time in the long term. I aim to build the habit of keeping up with trends in computing, ideally through reading the MIT Technology Review or journals with a high impact factor.

Link to team contract
Initial Forum Post