Our clients frequently ask us, “Is my data safe?” Here’s an overview of our security practices, and how we keep your data safe, accessible and available.
This summary is fairly extensive, and links to even more articles for detailed explanations. If you’re just looking for something specific, maybe one of these items is for you:
- Terms of Service
- Data Processing Agreement
- CAIQ Lite security questionnaire
- Latest Pen-test result
- Data Storage and Retention Policy, Data Replication and Backup policy
- Incident response plan
- Internal Mobile Device Policy, Password Policy, Malware/Virus Policy
Found a security threat?
If you feel you have found a security vulnerability, learned about a new threat model, or want to report a security incident, please contact us immediately. We will keep all your data confidential and deal with your report right away. You can send us an email at email@example.com, or call us any time at +49 157 3432 5347.
Our software runs in the Google Cloud, using the App Engine platform. This is a Platform-as-a-Service, which enables application developers to focus on creating their application, while Google takes care of provisioning and configuring servers, firewalls and routers, providing the database, running automated backups, logging, auditing, physical access security, and so on.
App Engine uses Java 8, and Jetty as the application server. All static files are automatically hosted by the Google Content Delivery Network.
We don’t operate any of these servers “directly”; all of them are managed by the Google Cloud service, so we don’t need to worry about proper configuration, security patches, etc. This is all taken care of automatically.
Google constantly audits its services, and has been certified as compliant with:
- SSAE 16 / ISAE 3402 Type II
- SOC 2
- SOC 3
- ISO 27001, 27017, 27018
- PCI DSS v3.1
Read more here, or download the compliance reports over here.
In addition to the Google Cloud, we use a handful of third-party products to provide our services; you can learn more about them on our subprocessors page.
We don’t operate any servers on our premises, and we don’t have a local network, routers, or firewalls (though of course our subprocessors do). We explicitly don’t download any customer data onto local workstations, except in the very rare case of troubleshooting a problem; in that case the data must be encrypted at rest and deleted immediately afterwards. You can learn more on our mobile devices policy page.
Passwords and cookies
Passwords: Your passwords are never stored in plain text. Instead, they are hashed using the BCrypt algorithm. In layman’s terms, your password is garbled beyond recognition and then saved; the result cannot feasibly be reversed. This also means we cannot recover passwords for you: since the hash cannot be reversed, you need to reset them instead.
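As a rough illustration of the one-way property described above: Small Improvements uses BCrypt, but the same principle can be sketched with Python’s standard-library PBKDF2. The function names and iteration count here are our own illustrative choices, not SI’s implementation.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; real deployments tune this

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random salt; the digest cannot be reversed."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because only the salted digest is stored, even a full database leak does not reveal the original passwords; each one would have to be guessed and re-hashed individually.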
If you don’t want to use our system to store passwords at all, you’re free to integrate with Okta, OneLogin or Google Workspace via SSO as well. In that case, the passwords will be managed in their systems (or in your own LDAP or AD if you configure those systems accordingly). Please see our Single Sign-On overview for more information.
Cookies: Cookies may be used to authenticate, but the cookie-based authentication does not store your actual password in the cookie. All that gets saved is a randomly created token that allows you to log in and access basic functionality. But if you want to access security-relevant settings like password or email settings, or the administration features, you’ll still be prompted to provide your actual password.
Remember Me: If security is paramount, you can switch off the Remember Me functionality entirely for your company account, you’ll find that option in the advanced settings dialog. Read more about our cookie-based security policies here.
2-Step Verification: We support 2-Step Verification using Authy, which supports SMS tokens and a mobile app for token generation. 2-Step Verification can be enabled on a per-user basis for key users, and enforced for all staff. This forces a user to verify each new device they want to connect to Small Improvements. Even if a password has been compromised (e.g. stolen from another service where a user used the same password), an attacker would still not be able to log in to Small Improvements.
All data is encrypted during transit using HTTPS/SSL. All data is also encrypted by default in the Google data centers, by Google. In addition, we encrypt string-based content such as written feedback, objectives, and performance reviews in the database on a per-field basis, using symmetric AES-256 encryption, making it even harder to analyze the data in case of a database breach.
The encryption/decryption process happens on the server, at the so-called service level, before and after accessing the database. Sometimes we get asked why it doesn’t happen on the client already. With symmetric encryption, encrypting on the client doesn’t help: other clients need to decrypt the data as well, so the decryption key would have to be distributed to every client too. That would actually be less secure.
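The service-level placement can be sketched as follows. This example uses the third-party `cryptography` package’s Fernet recipe (an AES-based authenticated scheme) as a stand-in for the AES-256 field encryption described above; the function names and the dict-as-database are our own illustrative assumptions.

```python
from cryptography.fernet import Fernet

# One symmetric key held only by the service layer; never sent to clients.
key = Fernet.generate_key()
cipher = Fernet(key)

def save_review(db: dict, review_id: str, text: str) -> None:
    """Encrypt the field at the service level; store only ciphertext."""
    db[review_id] = cipher.encrypt(text.encode())

def load_review(db: dict, review_id: str) -> str:
    """Decrypt on the way out; authorized clients receive plaintext."""
    return cipher.decrypt(db[review_id]).decode()

db: dict[str, bytes] = {}
save_review(db, "r1", "Great collaboration this quarter")
print(db["r1"] == b"Great collaboration this quarter")  # False: stored encrypted
print(load_review(db, "r1"))
```

Because the key lives only on the server, a dump of the database table alone yields ciphertext, while every legitimate client still reads the data normally.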
In addition, you (as the admin) can enforce further security mechanisms:
Password length enforcements: By default, we enforce passwords to be at least 8 characters long. In addition, we give users an indication of how secure their password is. 8 characters is pretty short, so you can define a company-wide policy with a larger minimum password length, for instance forcing passwords to be at least 10 characters long.
The setting can be found in the User Settings portion of the app by visiting Administration -> Overview -> User Settings.
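A policy check of this kind boils down to a small comparison; the names below are illustrative, not SI’s actual code.

```python
DEFAULT_MIN_LENGTH = 8  # the system-wide floor described above

def meets_policy(password: str, company_min_length: int = DEFAULT_MIN_LENGTH) -> bool:
    """True if the password satisfies the stricter of the two minimums."""
    return len(password) >= max(company_min_length, DEFAULT_MIN_LENGTH)

print(meets_policy("short1"))            # False: under 8 characters
print(meets_policy("longenoughpw", 10))  # True: 12 >= 10
```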
IP Range restrictions: If you’d like to restrict access to your company account to a certain IP range (e.g. your office plus anyone who can log in via your VPN) then you can restrict those IP ranges as well. While some people argue that IP addresses can be tampered with, it’s not trivial to achieve this for more than a single request (the response from the server will get sent to the IP address the attacker pretends to be at). So IP range filtering adds to security as well.
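An allow-list check like this can be sketched with Python’s `ipaddress` module. The ranges below are documentation-reserved example networks, and the configuration shape is our own assumption.

```python
import ipaddress

# Hypothetical allow-list: an office network plus a VPN egress range.
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # office (TEST-NET-3 example range)
    ipaddress.ip_network("198.51.100.0/24"),  # VPN (TEST-NET-2 example range)
]

def is_allowed(client_ip: str) -> bool:
    """Check the requester's IP against the configured ranges."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))  # True
print(is_allowed("192.0.2.7"))     # False
```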
Preventing unauthorized access from within
Our security model anticipates that even a member of the same company might be an adversary – for instance an unhappy coworker. So in addition to our large number of automated tests that prevent outside access, we put just as much effort into preventing one user from accessing other staff members’ information. Thousands of tests are run day and night to ensure that we don’t accidentally introduce an error that exposes a user’s review to another user.
Some screens (like the “data reset” screen and the “data export” screen) even require admins to get in touch with Small Improvements Support, so that a disgruntled administrator cannot simply export or erase your data.
Preventing social engineering and attacks against SI staff computers
Many attacks these days are not targeting the server, but work by tricking staff into downloading and running infected software, or visiting sites that have been compromised and which install malware onto visitors’ computers.
We reduce these risks in several ways:
- We’re a small team, so it’s impossible for someone to pretend they are “someone important” from another business division. A common social engineering technique is a call like this: “Hi, this is Joe from the IT department. We’re seeing unusual activity on your computer, can you please visit the link I just sent you by mail, to install the latest anti-virus software?” We’re way too small for this to work.
- We’re security-aware and regularly train all staff to be cautious, and to be especially sceptical about emails, but also about other internal communication channels. If an attacker impersonated the CEO and sent an unrequested (and infected) file by email, the recipient would ask for confirmation, and suspicious requests via Slack are likewise confirmed by a phone call.
- We use only the most up-to-date browsers, and we keep our operating systems up to date as well. We mainly use macOS and Linux, which are substantially harder to attack than Windows.
- We use different passwords for every service we use, so that one compromised service won’t allow an adversary to access a user’s other accounts.
- 2-Factor-Authentication is mandated for all systems that support it
- Our workstations’ hard drives are encrypted, so even in the case of theft, an attacker wouldn’t be able to reverse-engineer our source code or upload a compromised version of Small Improvements.
For more detail, check out our Access Control Policy, our Malware & Virus Policy, and our Password Policy. There are many other mechanisms which we won’t discuss on our website. We’re not claiming to have superhuman abilities, but security is our biggest concern, since we’d be out of business if someone managed to hack us. So there are plenty of other items we consider when developing our features, training our staff, and deploying new versions.
Full disclosure policy: In case anything should ever happen, we will disclose the incident to minimize damage. See also our incident response plan.
Access restrictions to our database
Our database is hosted inside Google Data centers, and therefore in some of the physically most secure places possible.
Non-physical access to our production database is severely limited too. Only select lead developers can upload new software releases or view the actual raw database, and access is restricted by two-factor authentication.
Access to the administration backend is equally restricted. Only the lead developers can access all details such as reviews or feedback, while SI support staff can only view general information about a client (such as how many users exist, who logged in and when, and what review cycles exist).
It is our policy not to look into customer data unless permission has been granted by the customer to help troubleshoot a bug. Most bugs we encounter can be fixed by reproducing the situation locally, or by analyzing the server logfiles and stacktraces.
Preventive measures are just one side of the coin. It’s crucial to have third parties double-check the security model too. We do this at two levels: Ongoing tests and dedicated security audits.
As an ongoing measure, we use a service called HackerOne that connects white-hat hackers with software vendors. We’re running a bounty program that encourages hackers to break Small Improvements’ security model, and we pay rewards in case someone finds issues.
As an additional measure, we’re using external pentesting companies on an annual basis. The most recent test was conducted by cure53 in January 2023, using a white-box approach. The executive report can be found here, and the detailed report can be requested by mailing us.
We find the combination of both approaches gives us the best of both worlds, going both broad and deep and surfacing a very diverse set of security issues. No issue so far has been on the scale of “very severe”, but we have definitely found and resolved several important vulnerabilities.
Google Cloud is hosted on a highly distributed network across Google Datacenters. The data is constantly replicated as well, so even if an entire data center goes down, others still have all the data, and continue serving requests without any end-user noticing.
We create full backups twice a day as well. So in the event of catastrophic failure of all data centers, or in the case of a grave programming mistake that accidentally wipes data from within our application, we can resort to the backups. We store these backups on an entirely independent service of the Google network.
We have never had to use these backups in the many years of our product’s existence, but we frequently ensure the backups actually work, by restoring them onto a separate server and ensuring the data is all there. In fact, the backup data is used to populate our data warehouse, so major data loss, or incomplete backups, would get noticed very quickly.
We keep backups for 6 months, after that they get deleted on a rolling basis.
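Rolling deletion after a fixed retention window can be sketched like this; the naming is illustrative, and 183 days is used here as an approximation of 6 months.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=183)  # roughly 6 months

def expired_backups(backups: dict[str, datetime], now: datetime) -> list[str]:
    """Return the names of backups older than the retention window."""
    return [name for name, created in backups.items() if now - created > RETENTION]

now = datetime(2024, 7, 1)
backups = {
    "backup-2023-12-01": datetime(2023, 12, 1),  # 213 days old: expired
    "backup-2024-06-15": datetime(2024, 6, 15),  # 16 days old: kept
}
print(expired_backups(backups, now))  # ['backup-2023-12-01']
```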
Additional information can be found in our backup policy document.
Security is the most important aspect when choosing a cloud platform. But there are other related topics that deserve a mention. An application needs to be more than secure, it needs to be available and functional as well, you need to get support for issues that arise, and so on.
We picked Google Cloud precisely because it’s optimized for availability. An application must meet quite a few criteria to run on App Engine at all, and certain coding practices are simply not possible there, mainly because downtime is unacceptable. In return, your application stays available under normal conditions.
Our entire business model is geared to providing the best user experience possible, and availability is a key ingredient. We typically achieve 99.95% uptime according to our Pingdom tracker (see details here).
We will do whatever is possible to keep it this way, and fix any downtime with the highest priority, in the middle of the night if necessary. Our own releases do not require downtime. We typically roll out small upgrades continuously a few times each week, and only on very rare occasions do we have to schedule an update on a weekend. But even weekend updates don’t require downtime.
Downtime cannot be ruled out entirely, but if for some reason we’re offline for more than a few minutes, we’ll write a post-mortem and reimburse affected customers, for instance by providing a free month of service.
We typically respond within 24 hours to “normal” questions that don’t seem urgent to us. We try to reply within 2 hours to urgent questions, e.g. if an administrator is stuck and doesn’t know how to proceed with performance reviews that are due within a day or two.
Our company is based in Berlin, Germany, but our support team is distributed across Berlin and the US, so we cover Europe and the US during all business hours. APAC support is limited to late evenings and early mornings local time.
Our normal support hours are Monday through Friday 8am – 6pm CST
Check our contact page for our phone and email addresses.
Overall product quality
Even if a system is up and running, program errors (“bugs”) may occur that prevent certain features from working. We take this just as seriously and are doing whatever we can to ensure the highest quality standards.
First of all, we have a very strict hiring process. Every applicant is vetted through at least three interviews with different people, and has to complete a “homework task” to prove that they are good at what they claim. In the case of developers (who will eventually have most access levels) we include a two-day coding task as well. Only those who pass our very high standards will join us, and we take onboarding and initial task selection just as seriously to ensure new staff don’t introduce preventable problems. Every staff member only receives access permissions on the “least privilege” principle, and we take our time before granting new staff admin permissions on any system.
We place a lot of emphasis on automated testing. Each important piece of our software is double-checked by a complementary piece of software that ensures the original code works well under expected and unexpected conditions. We use unit and integration tests on both the server and the client side, running thousands of automated tests continuously, preventing failures and errors every day.
Every feature we write is code-reviewed by another person, ideally from another team. Not every single line needs to be verified, but each feature or bugfix as a whole is revisited to ensure there are no unexpected side effects (and also to ensure we follow consistent coding practices).
Once all automated tests pass and all code reviews have finished, we deploy new features either to our QA system or, for smaller changes, directly to a staging system where we can test the change within our own live system. Only once we’ve tested diligently do we either make the feature available to clients as an “opt-in” beta, or promote it into production and monitor it for a while. If anything goes wrong, we can roll back to the previous release within a minute.
In addition, larger features are typically subjected to a lot of user-testing, and we keep improving features even after they have been shipped. We monitor the logfiles carefully, and every exception is automatically sent by email to 2 lead developers, keeping them on their toes as well. We don’t claim our software has no bugs, but we’re extra sensitive to any issues that may occur, and bug fixing always gets a higher priority than feature development.
Google Cloud comes with a sophisticated admin console that includes monitoring and a logfile viewing system. This allows us to pinpoint issues within minutes or even seconds, even when there are dozens of parallel requests. We regularly scan the logfiles for unexpected errors too, and get in touch with end-users if anything went wrong, notifying them about the problem and about our plans for fixing it. The system automatically sends an email to the development team if a bug occurs, to make sure no problem goes unnoticed.
We also use Pingdom to monitor latency and downtime from various locations across the globe. If you’d like to see our Pingdom statistics, we can give you access.
We keep an ever-growing internal audit log which helps SI staff get a good overview of what is happening in a client’s system. The audit log doesn’t contain confidential information, but we track all events like logins, logouts, edits to content, assignment of permissions, emails sent, etc, and it includes data like IP address, browser version, and so on.
The audit log is currently very technical and low-level and therefore not easy to read, so it’s not accessible to SI customers by default. But we’re happy to provide an Excel export on a case-by-case basis if you need it. Just contact us.
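An audit-log entry of the kind described might look like the following sketch. The field names are our assumptions for illustration, not SI’s actual schema.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """Illustrative audit-log entry: event type plus actor and client context."""
    event_type: str  # e.g. "login", "logout", "content_edit", "permission_change"
    actor: str       # the user performing the action
    ip_address: str
    browser: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

event = AuditEvent("login", "alice@example.com", "203.0.113.42", "Firefox 124")
print(asdict(event)["event_type"])  # login
```

Note that, as the text above says, such a log records who did what, when, and from where, but contains no confidential content itself.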
Data portability, deleting your data
You are welcome to create your own exports.
This option needs to be enabled by SI staff. You can download an XML file that contains all your company data, or you can download data per review cycle. You could for instance download just the XML file for all performance reviews done in the Review Cycle 2018. The XML file can be used to populate another system if you decide to leave our service.
You will find the XML download button for a cycle in the advanced menu on the cycle overview screen, and the XML download for the entire system under General Settings -> Advanced tab. Since the XML export contains all data, the download button is off by default; please contact SI support to enable it.
CSV, Excel, PDF
We also provide means of exporting data to CSV format, so you can further process it. This is available currently for performance review core data, 360 feedback, for objectives, and for your user database.
Please see our full guide for exporting account data.
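Once exported, such a CSV can be processed with standard tooling. The column names in this sketch are hypothetical; the real export layout may differ.

```python
import csv
import io

# Hypothetical snippet of an exported user CSV (columns are illustrative).
exported = """name,email,last_login
Alice,alice@example.com,2024-06-01
Bob,bob@example.com,2024-05-20
"""

def rows(csv_text: str) -> list[dict[str, str]]:
    """Parse an export into dicts for further processing or import elsewhere."""
    return list(csv.DictReader(io.StringIO(csv_text)))

for row in rows(exported):
    print(row["name"], row["last_login"])
```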
You may always decide to permanently delete your data. There’s a button on the Advanced Settings page (Administration -> Overview -> Advanced Settings) that lets you wipe all content. If you’ve been using our service for more than 4 weeks, however, this feature is protected by an additional master password.
You will only get this password if one of your administrators asks for it by email. We will check whether we’ve been in touch with you before. If more than one administrator exists in the system, we may ask the other person for confirmation. We put this extra step in to prevent snap decisions. After all, the data would really be gone, and our backups do not allow for selective recovery on a per-company basis.
Once deleted from the live database, the data is still available within our backups for another 6 months, then it also expires there.
The easiest way to report a bug is to send a mail to firstname.lastname@example.org
Do you need further information?
We are happy to answer more specific questions if you have any, and we’re happy to extend this document too. Please get in touch.