This directory documents some features that we consider "alpha" - available for use, but without any guarantee of long-term support or stability.
HashiCorp Vault is a popular secret-management solution. We've implemented alpha support for using it as an alternative to the secret-management services provided by AWS/GCP.
We've also done a PoC of logging from psoxy back to New Relic for monitoring; usage is explained below.
Given the nature of the use case, the proxy does A LOT of network transit:
client to proxy (request)
proxy to data source (request)
data source to proxy (response)
proxy to client (response)
So this drives cost in several ways:
larger network payloads increase proxy running time, which is billable
network volume itself is billable in some host platforms
indirectly, clients are waiting for the proxy to respond, which is a cost paid on the client side
Generally, the proxy is transferring JSON data, which is highly compressible. Using gzip is likely to reduce network volume by 50-80%, so we want to make sure we do this everywhere.
As of Aug 2023, we're not bothering to compress requests, as they're expected to be small (eg, current proxy use cases don't involve large PUT / POST operations).
Compression must be managed at the application layer (eg, in our proxy code).
This is done in co.worklytics.psoxy.Handler, which uses ResponseCompressionHandler to detect requests for a compressed response and then compress the response.
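To sanity-check this against a deployed instance, you can request a compressed response and inspect the headers; the URL and path below are placeholders for your own function URL:

```shell
# ask for a gzip-compressed response and dump only the response headers;
# expect to see `content-encoding: gzip` if compression is being applied
curl -s -o /dev/null -D - \
  -H "Accept-Encoding: gzip" \
  "https://YOUR_FUNCTION_URL/EXAMPLE_PATH"
```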
API Gateway is no longer used by our default Terraform examples, but compression can be enabled at the gateway level (instead of, or in addition to, relying on the function URL implementation):
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-gzip-compression-decompression.html
GCP Cloud Functions will handle compression themselves IF the request meets various conditions.
There is no explicit, Cloud Function-specific documentation about this, but it seems that the behavior for App Engine applies:
https://cloud.google.com/appengine/docs/legacy/standard/go111/how-requests-are-handled#:~:text=For%20responses%20that%20are%20returned,HTML%2C%20CSS%2C%20or%20JavaScript.
All requests should be built using GzipedContentHttpRequestInitializer, which should add an Accept-Encoding: gzip header and append (gzip) to the User-Agent header. We believe this will trigger compression for most sources (the User-Agent convention being a practice Google seems to want).
For core, it seems you need to use the explicit processor path (which IntelliJ fills in for you) rather than the classpath, and to output generated code to the Module Content Root, not the Module Output directory.
The problem we're trying to solve: various features, such as VPCs, are relevant only to a small set of users, and enabling them for all cases would complicate the usual ones. So we need to provide easy support for extending the examples/modules to cover them in those extreme cases.
Composition is the canonical Terraform approach.
3 approaches:
a. composition, which is canonical Terraform
b. commented-out blocks; cons: validation, and the instructions needed to explain them to customers are more complex
c. conditionals + variables; validation will work, but requires hacky 0-indexes in places (see the sketch below)
pros: simplest for customers; easiest to read/follow
cons: verbose interfaces; brittle stacks (changing a variable requires changing many throughout the hierarchy)
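As a rough sketch of the "conditionals + variables" approach (the variable and resource names here are illustrative, not taken from our modules):

```hcl
# illustrative only: gate an optional feature behind a boolean variable
variable "provision_vpc" {
  type        = bool
  default     = false
  description = "whether to provision a VPC for the proxy instances"
}

resource "aws_vpc" "proxy" {
  count      = var.provision_vpc ? 1 : 0 # the 'hacky 0-indexes' mentioned above
  cidr_block = "10.0.0.0/16"
}

# downstream references must index into the (possibly empty) resource list
locals {
  proxy_vpc_id = var.provision_vpc ? aws_vpc.proxy[0].id : null
}
```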
From main:
follow the steps output by that tool
if you need interim testing, create a "branch" of the release (eg, branch v0.4.16 instead of a tag), and trigger gh workflow run ci-terraform-examples-release.yaml
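For example (the branch name is illustrative):

```shell
# run the release workflow against the interim release branch rather than a tag
gh workflow run ci-terraform-examples-release.yaml --ref v0.4.16
```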
On rc-:
QA the aws and gcp dev examples by running terraform apply for each, and testing various connectors.
Scan a GCP container image for vulnerabilities:
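One way to do this is with gcloud's on-demand scanning (assumes the On-Demand Scanning API is enabled in the project; the image URI is a placeholder):

```shell
# submit the scan, capture the scan id, then list the findings
gcloud artifacts docker images scan \
  "us-docker.pkg.dev/YOUR_PROJECT/YOUR_REPO/IMAGE:TAG" \
  --remote --format='value(response.scan)' > /tmp/scan_id.txt
gcloud artifacts docker images list-vulnerabilities "$(cat /tmp/scan_id.txt)"
```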
Create a PR to merge rc- into main.
After merged to main:
With v0.4.47, we're adding alpha support for AWS Secrets Manager. This feature is not yet fully documented or stable.
A couple notes:
some connectors, in particular Zoom/Jira, rotate tokens frequently, so they will generate a lot of versions of secrets. AFAIK, AWS will still bill you for just one secret, as only one should be staged as the 'current' version; but you should monitor this and review the particular terms and pricing model of your AWS contract.
our modules will create secrets ONLY in the region into which your proxy infra is being deployed, based on the value set in your terraform.tfvars file.
Migration from Parameter Store: the default storage for secrets is AWS Systems Manager Parameter Store SecureString parameters. If you have existing secrets in Parameter Store that aren't managed by Terraform, you can copy them to a secure location to avoid needing to re-create them for every source.
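For example, to print an existing value out of Parameter Store so you can copy it (the parameter name below is hypothetical; use whatever names/paths your deployment uses):

```shell
# print the decrypted SecureString value so you can store it somewhere safe
aws ssm get-parameter \
  --name "PSOXY_GCAL_CLIENT_SECRET" \
  --with-decryption \
  --query "Parameter.Value" \
  --output text
```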
If you forked the psoxy-example-aws repo prior to v0.4.47, you should copy main.tf and variables.tf from that version or later of the repo and unify the version numbers with your own (>= 0.4.47).
Add the following to your terraform.tfvars:
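A minimal sketch of what this looks like; the variable name secrets_store_implementation is our assumption here, so confirm it against variables.tf in your copy of the example before applying:

```hcl
# assumed variable name; confirm against variables.tf in your example repo
secrets_store_implementation = "aws_secrets_manager"
```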
If you previously filled any secret values via the AWS web console (such as API secrets you were directed to create in the TODO 1 files for certain sources), you should copy those values now.
Run terraform apply; review the plan and confirm when ready.
Fill the values of the secrets in Secrets Manager with the values you copied from Parameter Store. If you did not copy the values, see the TODO 1.. files for each connector to obtain new values.
Navigate to the AWS Secrets Manager console and find the secret you need to fill. If there's no option to fill the value, click 'Retrieve secret value'; it should then prompt you with an option to fill it.
IMPORTANT: Choose 'Plain Text' and remove the brackets ({}) that AWS prefills the input with!
Then copy-paste the value EXACTLY as-is. Ensure no leading/trailing whitespace or newlines, and no encoding problems.
AWS Secrets Manager secrets will be stored/accessed with the same path as the SSM parameters; eg, under the value of aws_ssm_param_root_path, if any.
q: support distinct path for secrets? or generalize parameter naming?
For customers wishing to use New Relic to monitor their proxy instances, we have alpha support for this in AWS. We provide no guarantee as to how it works, nor as to whether its behavior will be maintained in the future.
To enable:
Set your proxy release to v0.4.39.alpha.new-relic.1.
Add the following to your terraform.tfvars to configure it:
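A sketch; the specific NEW_RELIC_ variable names and values below are placeholders for whatever your New Relic account requires:

```hcl
general_environment_variables = {
  # placeholder NEW_RELIC_ settings; substitute the values for your account
  NEW_RELIC_ACCOUNT_ID  = "1234567"
  NEW_RELIC_LICENSE_KEY = "your-new-relic-ingest-license-key"
}
```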
(if you already have a defined general_environment_variables variable, just add the NEW_RELIC_ variables to it)
If you want to make private (non-public) customizations to Psoxy's source/terraform modules, you may wish to create a private fork of the repo (if you intend to commit your changes, a public fork in GitHub should suffice).
See Duplicating a Repo for guidance.
Specific commands for the Psoxy repo are below:
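A minimal sketch, following GitHub's documented "duplicating a repository" procedure (the private repo URL is a placeholder; create that empty repo first):

```shell
# mirror-clone the public repo, then push it to your own private repo
git clone --bare https://github.com/Worklytics/psoxy.git
cd psoxy.git
git push --mirror https://github.com/YOUR_ORG/psoxy-private.git
cd .. && rm -rf psoxy.git

# work from your private copy, keeping the public repo as an upstream remote
git clone https://github.com/YOUR_ORG/psoxy-private.git
cd psoxy-private
git remote add upstream https://github.com/Worklytics/psoxy.git
```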
As of Nov 10, 2022, Psoxy has added alpha support for using Hashicorp Vault as its secret store (rather than AWS Systems Manager Parameter Store or GCP Secret Manager). We're releasing this as an alpha feature, with potential for breaking changes to be introduced in any future release, including minor releases which should not break production-ready features.
NOTE: you will NOT be able to use the Terraform examples found in infra/examples; you will have to adapt the modular forms of those found in infra/modular-examples, swapping the host platform's secret manager for Vault.
Set the following environment variables in your instance:
VAULT_ADDR - the address of your Vault instance, eg https://vault.example.com:8200; NOTE: it must be accessible from the AWS account / GCP project where you're deploying
VAULT_TOKEN - choose the appropriate token type for your use case; we recommend a periodic token that can look up and renew itself, with a period of > 8 days. With such a setup, Psoxy will look up and renew this token as needed; otherwise, it's your responsibility to either renew it or replace it (by updating this environment variable) before expiration.
VAULT_NAMESPACE - optional; set only if you're using Vault Namespaces
PATH_TO_SHARED_CONFIG - eg, secret/worklytics_deployment/PSOXY_SHARED/
PATH_TO_INSTANCE_CONFIG - eg, secret/worklytics_deployment/PSOXY_GCAL/
Configure your secrets in Vault. Given the above, Psoxy will connect to Vault in lieu of the usual Secret storage solution for your cloud provider. It will expect config properties (secrets) organized as follows:
global secrets: ${PATH_TO_SHARED_CONFIG}${PROPERTY_NAME}; eg, with PATH_TO_SHARED_CONFIG as secret/worklytics_deployment/PSOXY_SHARED/, then:
secret/worklytics_deployment/PSOXY_SHARED/PSOXY_SALT
secret/worklytics_deployment/PSOXY_SHARED/PSOXY_ENCRYPTION_KEY
per-connector secrets: ${PATH_TO_INSTANCE_CONFIG}${PROPERTY_NAME}; eg, with PATH_TO_INSTANCE_CONFIG as secret/worklytics_deployment/PSOXY_GCAL/, then:
secret/worklytics_deployment/PSOXY_GCAL/RULES
secret/worklytics_deployment/PSOXY_GCAL/ACCESS_TOKEN
secret/worklytics_deployment/PSOXY_GCAL/CLIENT_ID
secret/worklytics_deployment/PSOXY_GCAL/CLIENT_SECRET
Ensure your ACL permits 'read' and, if necessary, 'write'. Psoxy will need to be able to read secrets from Vault and, in some cases (eg, OAuth tokens subject to refresh), write them. Additionally, if you're using a periodic token as recommended, the token must be authorized to look up and renew itself.
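A sketch of such a Vault policy, assuming a KV v1 secrets engine mounted at secret/ and the example paths above; adjust paths and capabilities to your deployment:

```hcl
# read shared secrets; read/write per-connector secrets (eg, refreshed OAuth tokens)
path "secret/worklytics_deployment/PSOXY_SHARED/*" {
  capabilities = ["read"]
}
path "secret/worklytics_deployment/PSOXY_GCAL/*" {
  capabilities = ["read", "create", "update"]
}

# allow the periodic token to look up and renew itself
path "auth/token/lookup-self" {
  capabilities = ["read"]
}
path "auth/token/renew-self" {
  capabilities = ["update"]
}
```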
Generally, follow Vault's guide: https://developer.hashicorp.com/vault/docs/auth/aws
We also have a Terraform module you can try for setting up Vault for use from Psoxy:
And another Terraform module to add Vault access for each psoxy instance:
Done manually, the steps are roughly:
Create IAM policy needed by Vault in your AWS account.
Create IAM User for Vault in your AWS account.
Enable the aws auth method in your Vault instance, and set the access key + secret for the Vault user created above.
Create a Vault policy to allow access to the necessary secrets in Vault.
Bind a Vault role with the same name as your lambda function to the lambda's AWS exec role (once for each lambda).
NOTE: we're pretty certain this must be the plain role ARN, not the assumed_role ARN, even though the latter is what Vault sees:
eg arn:aws:iam::{{YOUR_AWS_ACCOUNT_ID}}:role/PsoxyExec_psoxy-gcal
not arn:aws:sts::{{YOUR_AWS_ACCOUNT_ID}}:assumed_role/PsoxyExec_psoxy-gcal/psoxy-gcal
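Roughly, the corresponding Vault CLI commands (the account ID, credentials, and policy name are placeholders):

```shell
# enable the aws auth method and configure it with the IAM user's credentials
vault auth enable aws
vault write auth/aws/config/client \
  access_key="AKIA..." secret_key="..."

# bind a Vault role (named after the lambda) to the lambda's exec role ARN
vault write auth/aws/role/psoxy-gcal \
  auth_type=iam \
  bound_iam_principal_arn="arn:aws:iam::123456789012:role/PsoxyExec_psoxy-gcal" \
  policies="psoxy-gcal"
```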
TODO