Security & compliance

Our users trust us to keep their data safe and secure, a responsibility we take seriously. If you have any questions or concerns about this, please get in touch.

Vulnerability Disclosure

If you would like to report a vulnerability or security concern regarding any reDock product, please contact security@redock.com.

We will verify the report and take corrective action as soon as possible, then notify our users and the relevant authorities of the issue.

Compliance

SOC 2 Type 2

reDock is a SOC 2 Type 2 certified provider. Our SOC 2 certification covers the Security principle in the Trust Services Criteria. This assesses our systems and the access control designs we have in place to prevent unauthorized access to customer data.

General Data Protection Regulation (GDPR)

reDock is fully GDPR-compliant, and we handle our customers' personal data with great care and respect, as outlined in our terms of service, our privacy policy, and throughout this document. We follow industry best practices for security and privacy, and we have vetted all third-party processors we employ for compliance as well. Data controlled by our customers and provided via our web client and connectors remains our customers' responsibility under the GDPR, but we provide tools that help them stay compliant, including data retrieval via our web client, permanent data deletion, and strict security practices.

PCI DSS

All credit card and payment information is handled by our payment processors, Zoho and Stripe, both of which are PCI DSS compliant.

Infrastructure

Google Cloud Platform, which hosts reDock, undergoes regular independent audits for a range of standards including ISO 27001, ISO 27017, ISO 27018, SOC 2, SOC 3, CSA STAR, EU-U.S. Privacy Shield, HIPAA, and PCI DSS.

Infrastructure Security

reDock is hosted on Google Cloud Platform, which employs some of the best security practices in the industry. These practices are described in the Google security whitepaper and the Google infrastructure security design overview, and include encryption of data at rest and in transit within Google's infrastructure, as well as secure decommissioning of storage devices after use.

reDock employees do not have physical access to data centers, nor access to the underlying Google infrastructure.

Application Security

Authentication and Access Control

Users log in to their reDock accounts using external authentication providers (currently Microsoft Azure Active Directory) via an OAuth 2 flow, optionally with two-factor authentication, which we strongly recommend. The user's password is never transmitted to us, and we do not gain access to any external resources that belong to the account.

The client receives a reDock access token, which is transmitted either as a cookie or an HTTP header and provides access to our HTTP API. The token automatically expires after a period of inactivity. When traversing our frontend infrastructure, it is exchanged for a short-lived, cryptographically signed JWT, which is used to authenticate and authorize all internal RPC calls.
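
To illustrate the idea, the following Python sketch shows one way a frontend could mint and verify a short-lived, signed token carrying a user's identity and roles. It is a simplified, JWT-like illustration only; the claim names, lifetime, and key handling shown here are assumptions, not our actual implementation.

    # Illustrative sketch of minting and verifying a short-lived signed token for
    # internal RPC calls. Claim names, lifetime, and key handling are assumptions.
    import base64
    import hashlib
    import hmac
    import json
    import time

    SIGNING_KEY = b"example-internal-signing-key"  # in practice, loaded from a secret store
    TOKEN_LIFETIME_SECONDS = 60                     # short-lived by design

    def mint_internal_token(user_id: str, roles: list) -> str:
        """Create a signed, expiring token for internal service-to-service calls."""
        claims = {"sub": user_id, "roles": roles, "exp": int(time.time()) + TOKEN_LIFETIME_SECONDS}
        payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{signature}"

    def verify_internal_token(token: str) -> dict:
        """Check the signature and expiry before trusting the claims."""
        payload, signature = token.rsplit(".", 1)
        expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("invalid signature")
        claims = json.loads(base64.urlsafe_b64decode(payload.encode()))
        if claims["exp"] < time.time():
            raise PermissionError("token expired")
        return claims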

reDock datasets can be configured with either public or private read access by default, and individual authenticated users can be assigned various roles giving them read or write access as required.
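
As a rough sketch of how such checks compose, the Python below models a dataset with a default read visibility plus per-user role grants. The role names and structure are illustrative assumptions, not our actual data model.

    # Illustrative model of dataset access: a public/private read default combined
    # with per-user role assignments. Role names are assumptions.
    from dataclasses import dataclass, field
    from enum import Enum

    class Role(Enum):
        READER = "reader"
        WRITER = "writer"

    @dataclass
    class Dataset:
        name: str
        public_read: bool = False                  # private read access by default
        roles: dict = field(default_factory=dict)  # user id -> assigned Role

        def can_read(self, user_id):
            return self.public_read or (user_id is not None and user_id in self.roles)

        def can_write(self, user_id):
            return user_id is not None and self.roles.get(user_id) == Role.WRITER

    # Example: a private dataset readable and writable only by assigned users.
    proposals = Dataset(name="proposals", roles={"alice": Role.WRITER, "bob": Role.READER})
    assert proposals.can_write("alice") and proposals.can_read("bob")
    assert not proposals.can_read(None)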

Encryption

All access to reDock resources by end users is encrypted in transit using HTTPS with Transport Layer Security (TLS). Support for the older SSLv2, SSLv3, TLS 1.0, and TLS 1.1 protocols is disabled, as are several older cipher suites, since these have known security vulnerabilities. Internally, data is encrypted in transit and at rest as outlined under Infrastructure Security.
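
As a simplified illustration of this policy, a Python service can refuse connections below TLS 1.2 and restrict cipher suites using the standard library's ssl module. This is only a sketch of the policy, not our serving configuration; the certificate paths and cipher string are placeholders.

    # Sketch: enforcing a minimum TLS version and a modern cipher set in Python.
    # Certificate paths and the cipher string below are placeholders.
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # rejects SSLv3, TLS 1.0, TLS 1.1
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5:!RC4")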

Data Retention and Removal

We record a complete history for documents submitted via our API, frontend, and connectors. Uploaded documents, including their history, can be deleted via our API, and all such deletions occur within a few minutes.
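
As a purely hypothetical illustration, a deletion request against a REST-style document API might look like the sketch below. The base URL, path, and header format are invented for the example and are not our documented API.

    # Hypothetical example of deleting an uploaded document (and its history) over
    # a REST-style API. The endpoint and auth header format are assumptions.
    import requests

    API_BASE = "https://api.example.com/v1"   # placeholder, not a real endpoint
    ACCESS_TOKEN = "..."                      # the reDock access token described above

    def delete_document(document_id: str) -> None:
        response = requests.delete(
            f"{API_BASE}/documents/{document_id}",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()   # deletion is processed within a few minutes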

After removal, data is still retained in our backups for a limited period, to allow for recovery in the case of accidental or malicious deletion.

Application Development Lifecycle

We use continuous delivery to enable rapid and systematic development, testing, and deployment of our product, with automated error reporting and monitoring to alert us of problems. This ensures a quick and effective response to potential bugs and security issues, and reduces the risk of human error.

We use a secure software development lifecycle (SDL) methodology to provide guidance and direction on a set of practices that support security assurance and compliance requirements. The SDL helps our developers build more secure software by reducing the number and severity of vulnerabilities in software, while reducing development cost.

Operational Security

Encryption

All data is encrypted in transit and at rest as outlined under Infrastructure Security.

Access Control

Employees access central resources using two-factor authentication via Azure Active Directory, and only have access to the systems required for their role. All remote access is encrypted, either via HTTPS transport layer security or via VPN connections. Employees never directly access customer-controlled data unless required for support reasons, and always ask the customer for permission first.

Internal services are isolated from the Internet to the extent possible, and only have access to the specific resources they need, with the minimum necessary privilege level, using a combination of service-specific cryptographically signed access tokens or passwords and network-level firewall rules. Access tokens are stored encrypted in our Kubernetes orchestration platform or in Google Secret Manager, are only available via authenticated and encrypted RPC calls from the Kubelet node agents, and are provided to specific applications in isolated Linux containers (namespaces and cgroups) without ever hitting disk.
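
As a simplified sketch of the "never hits disk" pattern, an application can read its credential from an environment variable injected by the orchestrator and keep it only in memory. The variable name below is an illustrative assumption.

    # Sketch: reading a service credential injected by the orchestrator (for
    # example, a Kubernetes Secret exposed as an environment variable) so it is
    # held in memory only. The variable name SERVICE_API_TOKEN is an assumption.
    import os

    def load_service_token() -> str:
        token = os.environ.get("SERVICE_API_TOKEN")
        if not token:
            raise RuntimeError("SERVICE_API_TOKEN is not set; check the secret configuration")
        return token   # kept in memory only; never written to disk or logged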

Data Retention and Removal

All data is removed or anonymized as soon as possible after deletion or service cancellation, with a short grace period and backup retention as outlined in our terms of service to allow for recovery in the case of accidental or malicious removal. Users can also contact us to have their data removed. Storage devices are securely decommissioned after use as outlined under Infrastructure Security.

Security Audits and Software Upgrades

We perform internal security audits and software upgrades every three months to keep our systems secure and reliable, and take immediate measures whenever significant security vulnerabilities are discovered.

Credit Cards and Payments

Credit cards and payments are processed by our payment providers, Zoho and Stripe. reDock never receives credit card information, nor do we have access to it, and it is removed from Zoho and Stripe as soon as the customer updates their card information or closes their account.

Geographic Location

All customer-controlled data provided via our API is stored exclusively within Canada or the United States, depending on client configuration. Data which we control, such as our user database and email processing, may be stored in the U.S. with third-party processors employed by us to deliver the service; see below for more information.

Third-Party Processors

Customer-controlled data provided via our web client is only stored in Google Cloud Platform, and never shared with any other third parties. Other customer data for which we are a controller, such as our user database, email processing, and error reporting, may be sent to certain third-party processors which we employ to deliver our services, as detailed in our terms of service. We have vetted the security and compliance of all such processors, and all transfers are performed securely and in line with best practices. We never share any customer data, personal or otherwise, with third parties unless they are employed by us under contract as data processors.

Business Continuity

High Availability

reDock is built using fully redundant and distributed systems, running across multiple data centers, and can withstand the loss of a single component without significant service disruptions. Components are regularly taken out of service during routine maintenance, without affecting availability, and Google Cloud Platform's live migration technology transparently migrates virtual machines to other hosts prior to infrastructure maintenance.

Incoming traffic is anycast-routed to Google's globally distributed load balancers, which pass it on to the nearest available data center, automatically routing around outages. The load balancers and CDN can absorb many types of distributed denial-of-service (DDoS) attacks, and many of our backend systems automatically scale to handle increased load.

Loss of a Google Cloud data center would cause some downtime; however, such events are extremely rare. Data centers have primary and alternate power sources, as well as diesel engine backup generators, each of which can provide enough electrical power to run the data center at full capacity. Data centers also have automated fire detection and suppression equipment.

Backups

Our databases are backed up daily to remote storage in multiple regions. Files and assets are replicated across multiple regions as well, with seven days of backups of historical versions.

Disaster Recovery

Our systems currently run in two Google Cloud regions: northamerica-northeast1 (Montréal) for clients with Canadian data residency, and us-east4 (Northern Virginia) for clients with US data residency. In the highly unlikely event of a region-wide outage or similar disaster, we can fully recover to a different region with minimal data loss within 12-24 hours.

Corporate Security

Employees

All employees are required to pass a background check and sign confidentiality agreements, and are only given access to the systems they need for their role. Employee laptops are secured with encrypted hard drives, firmware passwords, and firewalls. Access to central resources and third-party services is always encrypted and protected with VPNs and two-factor authentication, using a combination of passwords, time-based one-time passwords on dedicated devices, and cryptographic private keys.

Disclosure Policy

If a security issue or data leak is discovered, we will notify the affected users and relevant authorities as soon as possible, in line with current regulations and industry best practice. We also publish live reports of operational issues on our status page, which supports email notifications as well.