Availability: BETA
There are several connectors available for GitHub:
[GitHub Free/Pro/Teams] - for non-Enterprise GitHub organizations hosted on github.com.
[GitHub Enterprise Cloud] - for GitHub Enterprise instances hosted by github.com on behalf of your organization.
[GitHub Enterprise Server] - similar to 'Cloud', but you must customize the rules and API host; contact Worklytics for assistance.
The connector uses a GitHub App to authenticate and access the data.
For Enterprise Server, you must generate a user access token.
For Cloud (including Free/Pro/Teams/Enterprise), you must provide an installation token for authentication.
Both share the same configuration and setup instructions, except for the Administration permission required for Audit Log events.
Follow these steps:
Populate the `github_organization` variable in Terraform with the name of your GitHub organization.
From your organization, register a GitHub App with the following permissions, all as Read-only:
Repository:
Contents: for reading commits and comments
Issues: for listing issues, comments, assignees, etc.
Metadata: for listing repositories and branches
Pull requests: for listing pull requests, reviews, comments and commits
Organization:
Administration: (Only for GitHub Enterprise) for listing events from audit log
Members: for listing teams and their members
NOTES:
We assume that ALL the repositories to be listed are owned by the organization, not by individual users.
Apart from the GitHub instructions, please review the following:
"Homepage URL" can be anything, not required in this flow but required by GitHub.
The Webhooks check can be disabled, as this connector does not use them.
Keep `Expire user authorization tokens` enabled, as GitHub documentation recommends.
Once the app is created, please generate a new `Private Key`.
You must convert the certificate downloaded in the previous step from PKCS#1 format to PKCS#8. Please run the following command:
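A sketch of that conversion using OpenSSL (file names are placeholders; the input is the `.pem` key you downloaded from GitHub):

```shell
# Convert the PKCS#1 private key downloaded from GitHub into an unencrypted PKCS#8 key
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt \
  -in "{YOUR DOWNLOADED CERTIFICATE FILE}" \
  -out gh_pk_pkcs8.pem
```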
NOTES:
If the certificate is not converted to PKCS#8, the connector will NOT work. If the format is not correct, you might see a Java error such as `Invalid PKCS8 data.` in the logs.
The proposed command has been successfully tested on Ubuntu; it may differ for other operating systems.
Install the application in your organization. Go to your organization settings and then to "Developer Settings". Click "Edit" for your GitHub App; once you are in the app settings, click "Install App" and then the "Install" button. Accept the permissions to install it in your whole organization.
Once installed, the `installationId` is required, as it must be provided to the proxy as a parameter for the connector in your Terraform module. Go to your organization settings and click on `Third Party Access`. Click on `Configure` for the application you installed in the previous step, and you will find the `installationId` in the URL of the browser:
Copy the value of the `installationId` and assign it to the `github_installation_id` variable in Terraform. You will need to redeploy the proxy if that value was not populated before.
NOTE:
If `github_installation_id` is not set, the authentication URL will not be properly formatted and you will see `401: Unauthorized` when trying to get an access token.
If you see `404: Not found` in the logs, please review any IP restriction policies your organization might have; these could cause connections from the Psoxy AWS Lambda / GCP Cloud Function to be rejected.
Update the variables with the values obtained in the previous steps:
`PSOXY_GITHUB_CLIENT_ID` with the `App ID` value. NOTE: it should be the `App ID` value, as we are going to authenticate through the App and not via `client_id`.
`PSOXY_GITHUB_PRIVATE_KEY` with the content of the `gh_pk_pkcs8.pem` file from the previous step. You can open the certificate with VS Code or any other editor and copy all of its content as-is into this variable.
Once the certificate has been uploaded, please remove `{YOUR DOWNLOADED CERTIFICATE FILE}` and `gh_pk_pkcs8.pem` from your computer, or store them in a safe place.
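For illustration, the Terraform variables for the GitHub Cloud connector can be set like this (hypothetical values; use your own organization name and the `installationId` copied above):

```shell
# Hypothetical values -- replace with your GitHub organization name and installation ID
cat >> terraform.tfvars <<'EOF'
github_organization    = "your-org-name"
github_installation_id = "12345678"
EOF
```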
We provide a helper script to set up the connector, which will guide you through the steps below and automate some of them. Alternatively, you can follow the steps below directly:
You have to populate:
the `github_enterprise_server_host` variable in Terraform with the hostname of your GitHub Enterprise Server (example: `github.your-company.com`). This host should be accessible from the proxy instance function, as the connector will need to reach it.
the `github_organization` variable in Terraform with the name of your organization in GitHub Enterprise Server. You can put more than one, separated by commas (example: `org1,org2`).
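For illustration, a minimal `terraform.tfvars` sketch for these variables (hypothetical values):

```shell
# Hypothetical values -- replace with your GitHub Enterprise Server host and organization(s)
cat >> terraform.tfvars <<'EOF'
github_enterprise_server_host = "github.your-company.com"
github_organization           = "org1,org2"
EOF
```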
From your organization, register a GitHub App with the following permissions, all as Read-only:
Repository:
Contents: for reading commits and comments
Issues: for listing issues, comments, assignees, etc.
Metadata: for listing repositories and branches
Pull requests: for listing pull requests, reviews, comments and commits
Organization:
Administration: for listing events from audit log
Members: for listing teams and their members
NOTES:
We assume that ALL the repositories to be listed are owned by the organization, not by individual users.
Apart from the GitHub instructions, please review the following:
"Homepage URL" can be anything, not required in this flow but required by GitHub.
"Callback URL" can be anything, but we recommend something like http://localhost
as we will need it for the redirect as part of the authentication.
The Webhooks check can be disabled, as this connector does not use them.
Keep `Expire user authorization tokens` enabled, as GitHub documentation recommends.
Once the app is created, please generate a new `Client Secret`.
Copy the `Client ID`, then paste the following URL into your browser, replacing `CLIENT_ID` with the value you have just copied:
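For a standard GitHub Enterprise Server OAuth flow, that URL typically looks like the following (the hostname is an assumption; use your own `github_enterprise_server_host`, and `redirect_uri` should match the app's Callback URL):

https://github.your-company.com/login/oauth/authorize?client_id=CLIENT_ID&redirect_uri=http://localhost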
The browser will ask you to accept permissions and will then redirect you to the `Callback URL` set as part of the application. The URL should look like this: `https://localhost/?code=69d0f5bd0d82282b9a11`.
Copy the value of `code` and run the following request, replacing the placeholders with the values of your `Client ID` and `Client Secret`:
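A sketch of that request with curl (the hostname is an assumption; substitute your GitHub Enterprise Server host and the values copied above):

```shell
curl -X POST "https://github.your-company.com/login/oauth/access_token" \
  -H "Accept: application/json" \
  -d "client_id=<CLIENT_ID>" \
  -d "client_secret=<CLIENT_SECRET>" \
  -d "code=<CODE>"
```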
The response will be something like:
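Illustrative shape only (placeholder values; the exact fields may vary):

```json
{
  "access_token": "ghu_...",
  "expires_in": 28800,
  "refresh_token": "ghr_...",
  "refresh_token_expires_in": 15724800,
  "token_type": "bearer",
  "scope": ""
}
```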
You will need to copy the value of the `refresh_token`.
NOTES:
The `code` can be used only once, so if you need to repeat the process you will need to generate a new one.
Update the variables with the values obtained in the previous steps:
`PSOXY_GITHUB_ENTERPRISE_SERVER_CLIENT_ID` with the `Client ID` value.
`PSOXY_GITHUB_ENTERPRISE_SERVER_CLIENT_SECRET` with the `Client Secret` value.
`PSOXY_GITHUB_ENTERPRISE_SERVER_REFRESH_TOKEN` with the `refresh_token` value.
These instructions have been derived from worklytics-connector-specs; refer to that for definitive information.
This section describes all the available Data Sources you can use with your Psoxy instance.
The Psoxy HRIS (human resource information system) connector is intended to sanitize data exported from an HRIS system which you intend to transfer to Worklytics. The expected format is a CSV file, as defined in the documentation for import data (obtain from Worklytics).
Example Data:
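A purely hypothetical snippet for illustration (column names and required fields are defined by the Worklytics import documentation, which is authoritative):

```csv
EMPLOYEE_ID,EMPLOYEE_EMAIL,JOIN_DATE,LEAVE_DATE,MANAGER_ID
1,alice@acme.com,2020-01-15,,3
2,bob@acme.com,2021-06-01,2023-03-31,3
3,carol@acme.com,2019-02-01,,
```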
Create an access token for a sufficiently privileged user (one who can see all the workspaces/teams/projects/tasks you wish to import to Worklytics via this connection).
Update the content of the `PSOXY_ASANA_ACCESS_TOKEN` variable or the `ACCESS_TOKEN` environment variable with the token value obtained in the previous step.
NOTE: derived from worklytics-connector-specs; refer to that for definitive information.
The Dropbox Business connector through Psoxy requires a Dropbox Application created in the Dropbox Console. The application does not need to be public, and it needs to have the following scopes to support all the operations for the connector:
`files.metadata.read`: for file listing and revisions
`members.read`: for member listing
`events.read`: for event listing
`groups.read`: for group listing
Go to https://www.dropbox.com/apps and click `Build an App`.
Then go to https://www.dropbox.com/developers to enter the `App Console` and configure your app.
Now that you are in the app, go to `Permissions` and check all the scopes described above. NOTE: the UI will probably mark additional required permissions automatically (like `account_info_read`). Just check the ones described here, and the UI will ask you to include any others required.
In Settings, you can access the `App key` and `App secret`. You can create an access token here, but it has a limited expiration. We need a long-lived token, so edit the following URL with your `App key` and paste it into the browser:
https://www.dropbox.com/oauth2/authorize?client_id=<APP_KEY>&token_access_type=offline&response_type=code
That will return an `Authorization Code` that you have to copy. NOTE: this `Authorization Code` is for a single use; if it has expired or been used, you will need to get a new one by pasting the URL in the browser again.
Now, replace the values in the following command and run it from the command line in your terminal. Replace `Authorization Code`, `App key` and `App secret` in the placeholders:
curl https://api.dropbox.com/oauth2/token -d code=<AUTHORIZATION_CODE> -d grant_type=authorization_code -u <APP_KEY>:<APP_SECRET>
After running that command, if successful you will see a response like this:
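Illustrative shape only (placeholder values; the exact fields may vary):

```json
{
  "access_token": "sl.ABC...",
  "token_type": "bearer",
  "expires_in": 14400,
  "refresh_token": "xxxxxxxx",
  "scope": "events.read files.metadata.read groups.read members.read",
  "uid": "12345",
  "account_id": "dbid:AAAA..."
}
```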
Finally, set the following variables in AWS Systems Manager Parameter Store / GCP Cloud Secrets (if default implementation):
`PSOXY_dropbox_business_REFRESH_TOKEN` secret variable with the value of the `refresh_token` received in the previous response
`PSOXY_dropbox_business_CLIENT_ID` with the `App key` value.
`PSOXY_dropbox_business_CLIENT_SECRET` with the `App secret` value.
Example commands (*) that you can use to validate proxy behavior against the Google Workspace APIs. Follow the steps and change the values to match your configuration when needed.
You can use the `-i` flag to impersonate the desired user identity when running the testing tool. Example:
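A hypothetical invocation (the test-tool entry point, proxy URL, and user are assumptions; use the actual tool path from your checkout and your deployed instance's URL):

```shell
# Hypothetical example -- adjust the tool path, proxy URL, and impersonated user to your deployment
node tools/psoxy-test/cli-call.js \
  -u "https://<your-proxy-host>/psoxy-gcal/calendar/v3/calendars/primary/events" \
  -i "svc-worklytics@your-domain.com"
```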
For AWS, change the role to assume to one with sufficient permissions to call the proxy (`-r` flag). Example:
If any call appears to fail, repeat it using the `-v` flag.
(*) All commands assume that you are at the root path of the Psoxy project.
Get the calendar event ID (accessor path in response `.items[0].id`):
Get event information (replace `calendar_event_id` with the corresponding value):
Get the group ID (accessor path in response `.groups[0].id`):
Get group information (replace `google_group_id` with the corresponding value):
Get the user ID (accessor path in response `.users[0].id`):
Get user information (replace [google_user_id] with the corresponding value):
Thumbnail (expect its contents to be redacted; replace [google_user_id] with the corresponding value):
API v2
API v3 (*)
(*) Notice that only the "version" part of the URL changes; all subsequent calls should work for both `v2` and `v3`.
Get the file ID (accessor path in response `.files[0].id`):
Get file details (replace [drive_file_id] with the corresponding value):
YMMV, as the file at index `0` must actually be of a type that supports revisions for this to return anything. You can try different file IDs until you find one that does.
YMMV, as the file at index `0` must actually be of a type that has comments for this to return anything. You can try different file IDs until you find one that does.
NOTE probably blocked by OAuth metadata only scope!!
NOTE probably blocked by OAuth metadata only scope!!
Get the file comment ID (accessor path in response `.items[0].id`):
Get file comment details (replace `file_comment_id` with the corresponding value):
NOTE probably blocked by OAuth metadata only scope!!
YMMV, as above; try different file comment ID values until you find a file with comments, and a comment that has replies.
NOTE: limited to 10 results, to keep it readable.
NOTE: limited to 10 results, to keep it readable.
Connecting to Microsoft 365 data requires:
creating one Microsoft Entra ID (formerly Azure Active Directory, AAD) application per Microsoft 365 data source (eg, `msft-entra-id`, `outlook-mail`, `outlook-cal`, etc).
configuring an authentication mechanism to permit each proxy instance to authenticate with the Microsoft Graph API (since Sept 2022, the supported approach is federated identity credentials).
granting each Entra ID enterprise application access to the specific scopes of Microsoft 365 data the connection requires.
Steps (1) and (2) are handled by the `terraform` examples. To perform them, the machine running `terraform` must be authenticated as a Microsoft Entra ID user with, at minimum, the following role in your Microsoft 365 tenant:
Cloud Application Administrator: to create/update/delete Entra ID applications and their settings during the Terraform apply command.
Please note that this role is the least-privileged role sufficient for this task (creating a Microsoft Entra ID Application), per Microsoft's documentation.
This role is needed ONLY for the initial `terraform apply`. After each Azure AD enterprise application is created, the user will be set as the `owner` of that application, providing ongoing access to read and update the application's settings. At that point, the general role can be removed.
Step (3) is performed via the Microsoft Entra ID web console by a user with administrator permissions. Running the `terraform` examples for steps (1)/(2) will generate a document with specific instructions for this administrator. This administrator must have, at minimum, the following role in your Microsoft 365 tenant:
to Consent to application permissions to Microsoft Graph
Again, this is the least-privileged role sufficient for this task, per Microsoft's documentation.
Psoxy uses federated identity credentials to authenticate with the Microsoft Graph API. This approach avoids the need for any secrets to be exchanged between your Psoxy instances and your Microsoft 365 tenant. Rather, each API request from the proxy to the Microsoft Graph API is signed by an identity credential generated in your host cloud platform. You configure your Azure AD application for each connection to trust this identity credential as identifying the application, and Microsoft trusts your host cloud platform (AWS/GCP) as an external identity provider of those credentials.
Neither your proxy instances nor Worklytics ever hold any API key or certificate for your Microsoft 365 tenant.
The following Scopes are required for each connector. Note that they are all READ-only scopes.
NOTE: `Mail.ReadBasic` affords access only to email metadata, not content/attachments.
NOTE: These are all 'Application' scopes, allowing the proxy itself data access as an application, rather than on behalf of a specific authenticated end-user ('Delegated' scopes).
Besides having the `OnlineMeetings.Read.All` and `OnlineMeetingArtifact.Read.All` scopes defined in the application, you need to add a new role and a policy on the application created for reading OnlineMeetings. You will need PowerShell for this.
Please follow the steps below:
NOTE: It can be assigned through the Entra ID portal in the Azure portal OR in the Entra Admin center https://admin.microsoft.com/AdminPortal/Home. It is possible that, even when logged in with an admin account in the Entra Admin Center, the Teams role is not available to assign to any user; if so, please do it through the Azure Portal (Entra ID -> Users -> Assign roles).
Run the following commands in a PowerShell terminal:
And use the user with the "Teams Administrator" role to log in.
Add a policy for the application created for the connector, providing its application id
Grant the policy to the whole tenant (NOT to any specific application or user)
Issues:
If you receive "access denied" is because no admin role for Teams has been detected. Please close and reopen the Powershell terminal after assigning the role.
Commands have been tested over a Powershell (7.4.0) terminal in Windows, installed from Microsoft Store and with Teams Module (5.8.0). It might not work on a different environment
If you do not have the 'Cloud Application Administrator' role, someone with that role, or an alternative role that can create Azure AD applications, can create one application per connection and set you as an owner of each.
You can then `import` these into your Terraform configuration.
First, try `terraform plan | grep 'azuread_application'` to get the Terraform addresses for each application that your configuration will create.
Second, ask your Microsoft admin to create an application for each of those, set you as the owner, and send you the `Object ID` for each.
Third, use `terraform import <address> <object-id>` to import each application into your Terraform state.
At that point, you can run `terraform apply` and it should be able to update the applications with the settings necessary for the proxy to connect to the Microsoft Graph API. After that apply, you will still need a Microsoft 365 admin to perform the admin consent step for each application.
See https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/application#import for details.
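For example (the module address below is hypothetical; use the addresses reported by your own `terraform plan` and the Object ID sent by your admin):

```shell
# Hypothetical address and Object ID -- replace with your plan output and the real application's Object ID
terraform import 'module.msft_connection["outlook-mail"].azuread_application.connector' 00000000-0000-0000-0000-000000000000
```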
Psoxy's Terraform modules create certificates on your machine, deploy those certificates to Azure, and deploy the keys to your AWS/GCP host environment. This all works via APIs.
Sometimes Azure is a bit finicky about certificate validity dates, and you get an error message like this:
Just running `terraform apply` again (and maybe again) usually fixes it. It is likely something to do with Azure's clock relative to your machine's, plus whatever flight time is required between cert generation and it being PUT to Azure.
See docs for details. Specifically, the relevant scenario is a workload running in either GCP or AWS (your proxy host platform).
Source | Examples | Application Scopes
---|---|---
NOTE: the above scopes are copied from the connector specs module. They are accurate as of 2023-04-12; please refer to that module for a definitive list.
Ensure the user you are going to use for running the commands has the "Teams Administrator" role. You can add the role in the Entra admin center or Azure portal (see the NOTE above).
Install the Teams PowerShell module.
Follow steps on :
DEPRECATED - will be removed in v0.5; this is not the recommended approach, for a variety of reasons, since Microsoft released support for federated identity credentials in ~Sept 2022. See our module `azuread-federated-credentials` for the preferred alternative.
Example commands (*) that you can use to validate proxy behavior against the Slack Discovery APIs. Follow the steps and change the values to match your configuration when needed.
For AWS, change the role to assume to one with sufficient permissions to call the proxy (`-r` flag). Example:
If any call appears to fail, repeat it using the `-v` flag.
(*) All commands assume that you are at the root path of the Psoxy project.
Get a workspace ID (accessor path in response `.enterprise.teams[0].id`):
Get conversation details of that workspace (replace `workspace_id` with the corresponding value):
Get a channel ID (accessor path in response `.channels[0].id`):
Get DM information (no workspace):
Read messages for a workspace channel:
Omit the workspace ID if channel is a DM
Omit the workspace ID if channel is a DM
Example test commands that you can use to validate proxy behavior against various source APIs.
Assuming the proxy is auth'd as an application, you'll have to replace `me` with your MSFT ID or UserPrincipalName (often your email address).
Assuming the proxy is auth'd as an application, you'll have to replace `me` with your MSFT ID or UserPrincipalName (often your email address).
Assuming the proxy is auth'd as an application, you'll have to replace `me` with your MSFT ID or UserPrincipalName (often your email address).
NOTE: `beta` is used, as Worklytics relies on the 'metadata-only' OAuth scope `Messages.ReadBasic`, which is only supported by that API version.
Before running the example, you have to populate the following variables in Terraform:
`salesforce_domain`: the domain your instance is using.
`salesforce_example_account_id`: an example of any account ID; this is only applicable for example calls.
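For illustration, a `terraform.tfvars` sketch (hypothetical values; use your own domain and a real account ID from your instance):

```shell
# Hypothetical values -- replace with your Salesforce domain and an example account id from your data
cat >> terraform.tfvars <<'EOF'
salesforce_domain             = "yourcompany.my.salesforce.com"
salesforce_example_account_id = "0015Y00002c7g9QQAQ"
EOF
```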
Create a Salesforce application + client credentials flow with the following permissions:
Manage user data via APIs (`api`)
Access Connect REST API resources (`chatter_api`)
Perform requests at any time (`refresh_token`, `offline_access`)
Access unique user identifiers (`openid`)
Access Lightning applications (`lightning`)
Access content resources (`content`)
Perform ANSI SQL queries on Customer Data Platform data (`cdp_query_api`)
Apart from the Salesforce instructions above, please review the following:
"Callback URL" MUST be filled; can be anything as not required in this flow, but required to be set by Salesforce.
Application MUST be marked with "Enable Client Credentials Flow"
You MUST assign a user for Client Credentials; be sure you associate a "run as" user marked with "API Only Permission".
The policy associated with the user MUST have the following Administrative Permissions enabled:
API Enabled
APEX REST Services
The policy MUST have the application marked as "enabled" in "Connected App Access". Otherwise requests will return 401 with INVALID_SESSION_ID
The user set for "run as" on the connector should have, between its Permission Sets
and Profile
, the permission of View All Data
. This is required to support the queries used to retrieve Activity Histories by account id.
Once created, open "Manage Consumer Details"
Update the content of `PSOXY_SALESFORCE_CLIENT_ID` from the Consumer Key and `PSOXY_SALESFORCE_CLIENT_SECRET` from the Consumer Secret.
Finally, we recommend running the `test-salesforce` script with all the queries in the example to ensure the expected information covered by the rules can be obtained from the Salesforce API. Some test calls may fail with a 400 (bad request) response. That is expected if parameters requested in the query are not available (for example, running a SOQL query with fields that are NOT present in your model will force a 400 response from the Salesforce API). If that is the case, double-check the function logs to confirm that this is the actual error; you should see an error like the following one:

```json
WARNING: Source API Error [{
  "message": "\nLastModifiedById,NumberOfEmployees,OwnerId,Ownership,ParentId,Rating,Sic,Type\n ^\nERROR at Row:1:Column:136\nNo such column 'Ownership' on entity 'Account'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.",
  "errorCode": "INVALID_FIELD"
}]
```

In that case, removing the fields LastModifiedById, NumberOfEmployees, OwnerId, Ownership, ParentId, Rating, Sic, Type from the query will fix the issue.
However, if running any of the queries you receive a 401/403/500/512: a 401/403 might be related to a misconfiguration of the Salesforce Application due to lack of permissions; a 500/512 could be related to a missing parameter in the function configuration (for example, a missing value for the `salesforce_domain` variable in your Terraform vars).
NOTE: derived from worklytics-connector-specs; refer to that for definitive information.
As of May 2023, Atlassian has announced they will stop supporting Jira Server on Feb 15, 2024. Our Jira Server connector is intended to be compatible with Jira Data Center as well.
NOTE: as of Nov 2023, organizations are making production use of this connector; we've left it as alpha due to impending obsolescence of Jira Server.
NOTE: derived from worklytics-connector-specs; refer to that for definitive information.
Follow the instructions to create a Personal Access Token in your instance. As this is coupled to a specific User in Jira, we recommend first creating a dedicated Jira user to serve, in effect, as a "Service Account" for the connection (name it `svc-worklytics` or something similar). This will give you better visibility into the activity of the data connector, as well as avoid the connection inadvertently breaking if the Jira user who owns the token is disabled or deleted.
That service account must have READ permissions over your Jira instance, to be able to read issues, worklogs and comments, including their changelog where possible.
If you're required to specify a classical scope, you can add:
read:jira-work
Disable or set a reasonable expiration time for the token. If you set an expiration time, it is your responsibility to re-generate the token and reset it in your host environment to maintain your connection.
Copy the value of the token into the `PSOXY_JIRA_SERVER_ACCESS_TOKEN` variable as part of AWS Systems Manager Parameter Store / GCP Cloud Secrets.
Entra ID
Calendar
Teams (beta)
Google Workspace sources can be setup via Terraform, using modules found in our GitHub repo.
As of August 2023, we suggest you use one of our template repos, e.g.:
Within those, the `google-workspace.tf` and `google-workspace-variables.tf` files specify the Terraform configuration to use Google Workspace sources.
You (the user running Terraform) must have the following roles (or some of the permissions within them) in the GCP project in which you will provision the OAuth clients that will be used to connect to your Google Workspace data:
Role | Reason
---|---
Service Account Creator | create Service Accounts to be used as API clients
Service Account Key Admin | to access the Google Workspace API, the proxy must be authenticated by a key that you need to create
Service Usage Admin | you will need to enable the Google Workspace APIs in your GCP Project
As these are very permissive roles, we recommend that you use a dedicated GCP project so that these roles are scoped just to the Service Accounts used for this deployment. If you used a shared GCP project, these roles would give you access to create keys for ALL the service accounts in the project, for example - which is not good practice.
Additionally, a Google Workspace Admin will need to make a Domain-wide Delegation grant to the OAuth Clients you create. This is done via the Google Workspace Admin console. In the default setup, this requires the Super Admin role, but your organization may have a Custom Role with sufficient privileges.
We also recommend you create a dedicated Google Workspace user for Psoxy to use when connecting to your Google Workspace Admin API, with the specific permissions needed. This avoids the connection being dependent on a given human user's permissions and improves transparency.
This is not to be confused with a GCP Service Account. Rather, this is a regular Google Workspace user account, but intended to be assigned to a service rather than a human user. Your proxy instance will impersonate this user when accessing the Google Admin Directory and Reports APIs. (Google requires that these be accessed via impersonation of a Google user account, rather than directly using a GCP service account).
We recommend naming the account `svc-worklytics@{your-domain.com}`.
If you have already created a sufficiently privileged service account user for a different Google Workspace connection, you can re-use that one.
Assign the account a sufficiently privileged role. At minimum, the role must have the following privileges, read-only:
Admin API
Domain Settings
Groups
Organizational Units
Reports (required only if you are connecting to the Audit Logs, used for Google Chat, Meet, etc)
Users
Those privilege names refer to Google's documentation, as shown below (as of Aug 2023); you can refer there for more details about them.
The email address of the account you created will be used when creating the data connection to the Google Directory in the Worklytics portal. Provide it as the value of the 'Google Account to Use for Connection' setting when creating the connection.
If you choose not to use a predefined role that covers the above, you can define a Custom Role.
Using a Custom Role with 'Read' access to each of the required Admin API privileges is good practice, but least-privilege is also enforced in TWO additional ways:
the Proxy API rules restrict the API endpoints that Worklytics can access, as well as the HTTP methods that may be used. This enforces read-only access, limited to the required data types (and is actually even more granular than what Workspace Admin privileges and OAuth Scopes support).
the OAuth Scopes granted to the API client via Domain-wide Delegation. Each OAuth Client used by Worklytics is granted only read-only scopes, least-permissive for the data types required, eg `https://www.googleapis.com/auth/admin.directory.users.readonly`.
So a least-privileged custom role is essentially a 3rd layer of enforcement.
In the Google Workspace Admin Console as of August 2023, creating a 'Custom Role' for this user will look something like the following:
YMMV - Google's UI changes frequently and varies by Google Workspace edition, so you may see more or fewer options than shown above. Please scroll the list of privileges to ensure you grant READ access to API for all of the required data.
Google Workspace APIs use OAuth 2.0 for authentication and authorization. You create an OAuth 2.0 client in Google Cloud Platform and a credential (service account key), which you store as a secret in your Proxy instance.
When the proxy connects to Google, it first authenticates with the Google API using this secret (a service account key) by signing a request for a short-lived access token. Google returns this access token, which the proxy then uses for subsequent requests to Google's APIs until the token expires.
The service account key can be rotated at any time, and the terraform configuration examples we provide can be configured to do this for you if applied regularly.
More information: https://developers.google.com/workspace/guides/auth-overview
To initially authorize each connector, a sufficiently privileged Google Workspace Admin must make a Domain-wide Delegation grant to the OAuth Client you create, by pasting its numeric ID and a CSV of the required OAuth Scopes into the Google Workspace Admin console. This is a one-time setup step.
If you use the provided Terraform modules (namely, `google-workspace-dwd-connection`), a TODO file with detailed instructions will be created for you, including the actual numeric ID and scopes required.
Note that while Domain-wide Delegation is a broad grant of data access, its implementation in the proxy is mitigated in several ways, because the GCP Service Account resides in your own GCP project and remains under your organization's control - unlike the most common Domain-wide Delegation scenarios, which have been the subject of criticism by security researchers. In particular:
you may directly verify the numeric ID of the service account in the GCP web console, or via the GCP CLI; you don't need to take our word for it.
you may monitor and log the use of each service account and its key as you see fit.
you can ensure there is never more than one active key for each service account, and rotate keys at any time.
the key is only used from infrastructure (GCP Cloud Function or Lambda) in your environment; you should be able to reconcile logs and usage between your GCP and AWS environments should you desire, to ensure there has been no malicious use of the key.
While not recommended, it is possible to set up Google API clients without Terraform, via the GCP web console:
Create or choose the GCP project in which to create the OAuth Clients.
Activate relevant API(s) in the project.
Create a Service Account and a JSON key for the service account.
Base64-encode the key and store it as a Systems Manager Parameter in AWS (in the same region as your deployed lambda function). The parameter name should be something like `PSOXY_GDIRECTORY_SERVICE_ACCOUNT_KEY`. Ensure you do not inadvertently add extra characters, including whitespace, when copy-pasting the key value; see the example command after this list.
Get the numeric ID of the service account. Use this plus the oauth scopes to make domain-wide delegation grants via the Google Workspace admin console.
NOTE: you could also use a single Service Account for everything, but you will need to store its key repeatedly in AWS/GCP as the `SERVICE_ACCOUNT_KEY` for each of your Google Workspace connections.
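A sketch of encoding and storing the key via the AWS CLI (file name, parameter name, and region are placeholders; match whatever your deployment expects):

```shell
# Encode the downloaded JSON key without adding newlines (on macOS, use `base64 -i` instead of `-w0`)
base64 -w0 service-account-key.json > sa-key.b64

# Store it as a SecureString parameter in the same region as your lambda
aws ssm put-parameter \
  --name "PSOXY_GDIRECTORY_SERVICE_ACCOUNT_KEY" \
  --type "SecureString" \
  --value "file://sa-key.b64" \
  --region "us-east-1"
```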
If you remain uncomfortable with Domain-wide Delegation, a private Google Marketplace App is a possible, if tedious and harder to maintain, alternative. Here are some trade-offs:
Pros:
Google Workspace Admins may perform a single Marketplace installation, instead of multiple DWD grants via the admin console
"install" from the Google Workspace Marketplace is less error-prone/exploitable than copy-paste a numeric service account ID
visual confirmation of the oauth scopes being granted by the install
ability to "install" for a Org Unit, rather than the entire domain
Cons:
you must use a dedicated GCP project for the Marketplace App; "installation" of a Google Marketplace App grants all the service accounts in the project access to the listed OAuth scopes. You must understand that the OAuth grant is to the project, not a specific service account.
you must enable additional APIs in the GCP project (marketplace SDK).
as of Dec 2023, Marketplace Apps cannot be completely managed by Terraform resources; so there are more out-of-band steps that someone must complete by hand to create the App, and a simple `terraform destroy` will not remove the associated infrastructure. In contrast, `terraform destroy` in the DWD approach will result in revocation of the access grants when the service account is deleted.
You must monitor how many service accounts exist in the project and ensure only the expected ones are created. Note that all Google Workspace API access, as of Dec 2023, requires the service account to authenticate with a key; so any SA without a key provisioned cannot access your data.
To enable Slack Discovery with the Psoxy, you must first set up an app on your Slack Enterprise instance.
Go to https://api.slack.com/apps and create an app.
Select "From scratch", choose a name (for example "Worklytics connector") and a development workspace
Take note of your App ID (listed in "App Credentials"), then contact your Slack representative and ask them to enable the `discovery:read` scope for that App ID. If they also enable `discovery:write`, delete it for safety; the app just needs read access.
The next step depends on your installation approach; you might need to adjust it slightly.
Use this step if you want to install in the whole org, across multiple workspaces.
Add a bot scope (not really used, but Slack doesn't allow org-wide installations without a bot scope). The app won't use it at all. Just add, for example, the `users:read` scope, read-only.
Under "Settings > Manage Distribution > Enable Org-Wide App installation", click on "Opt into Org Level Apps", agree and continue. This allows to distribute the app internally on your organization, to be clear it has nothing to do with public distribution or Slack app directory.
Generate the following URL, replacing the placeholder YOUR_CLIENT_ID, and save it for the next step:
https://api.slack.com/api/oauth.v2.access?client_id=YOUR_CLIENT_ID
Go to "OAuth & Permissions" and add the previous URL as "Redirect URLs"
Go to "Settings > Install App", and choose "Install to Organization". A Slack admin should grant the app the permissions and the app will be installed.
Copy the "User OAuth Token" (also listed under "OAuth & Permissions") and store as PSOXY_SLACK_DISCOVERY_API_ACCESS_TOKEN
in the psoxy's Secret Manager. Otherwise, share the token with the AWS/GCP administrator completing the implementation.
Use these steps if you intend to install in just one workspace within your org.
Go to "Settings > Install App", click on "Install into workspace"
Copy the "User OAuth Token" (also listed under "OAuth & Permissions") and store as PSOXY_SLACK_DISCOVERY_API_ACCESS_TOKEN
in the psoxy's Secret Manager. Otherwise, share the token with the AWS/GCP administrator completing the implementation.
beta As an alternative to connecting Worklytics to the Slack Discovery API via the proxy, it is possible to use the bulk mode of the proxy to sanitize an export of Slack Discovery data and ingest the resulting sanitized data into Worklytics. Example data for this is given in the `example-bulk/` folder.
This data can be processed using custom multi-file type rules in the proxy, of which `discovery-bulk.yaml` is an example.
For clarity, the example files are NOT compressed, so they don't have the `.gz` extension; but the rules expect `.gz`.
As of July 2023, pulling historical data (last 6 months) and all scheduled and instant meetings requires a paid Zoom account on a Pro or higher plan (Business, Business Plus). On other plans, Zoom data may be incomplete.
Accounts on unpaid plans do not have access to some methods Worklytics uses, such as:
the Zoom Reports API - required for historical data
certain Zoom Meeting API methods, such as retrieving past meeting participants
The Zoom connector through Psoxy requires a Custom Managed App on the Zoom Marketplace. This app may be left in development mode; it does not need to be published.
Go to https://marketplace.zoom.us/develop/create and create an app of type "Server to Server OAuth".
After creation, it will show the App Credentials.
Copy the following values:
Account ID
Client ID
Client Secret
Share them with the AWS/GCP administrator, who should fill them in your host platform's secret manager (AWS Systems Manager Parameter Store / GCP Secret Manager) for use by the proxy when authenticating with the Zoom API:
`Account ID` --> `PSOXY_ZOOM_ACCOUNT_ID`
`Client ID` --> `PSOXY_ZOOM_CLIENT_ID`
`Client Secret` --> `PSOXY_ZOOM_CLIENT_SECRET`
NOTE: Anytime the Client Secret is regenerated, it needs to be updated in the proxy too. NOTE: the Client Secret should be handled according to your organization's security policies for API keys/secrets, as, in combination with the above, it allows access to your organization's data.
Fill the 'Information' section. Zoom requires company name, developer name, and developer email to activate the app.
No changes are needed in the 'Features' section. Continue.
Fill in the scopes section by clicking on `+ Add Scopes` and adding the following:
meeting:read:past_meeting:admin
meeting:read:meeting:admin
meeting:read:list_past_participants:admin
meeting:read:list_past_instances:admin
meeting:read:list_meetings:admin
meeting:read:participant:admin
report:read:list_meeting_participants:admin
report:read:meeting:admin
report:read:user:admin
user:read:user:admin
user:read:list_users:admin
Alternatively, the scopes `user:read:admin`, `meeting:read:admin`, and `report:read:admin` are sufficient, but as of May 2024 they are no longer available for newly created Zoom apps.
Once the scopes are added, click on `Done` and then `Continue`.
Activate the app
Example commands (*) that you can use to validate proxy behavior against the Zoom APIs. Follow the steps and change the values to match your configuration when needed.
For AWS, change the role to assume to one with sufficient permissions to call the proxy (`-r` flag). Example:
If any call appears to fail, repeat it using the `-v` flag.
(*) All commands assume that you are at the root path of the Psoxy project.
Now pull out a user id (`[zoom_user_id]`, accessor path in response `.users[0].id`). The next call is bound to a single user:
First pull out a meeting id (`[zoom_meeting_id]`, accessor path in response `.meetings[0].id`):
NOTE: This is for the Cloud-hosted version of Jira; for the self-hosted version, see the Jira Server instructions.
NOTE: These instructions are derived from worklytics-connector-specs; refer to that for definitive information.
Jira Cloud through Psoxy uses Jira OAuth 2.0 (3LO), which requires a Jira Cloud (user) account with the following classical scopes:
`read:jira-user`: for getting generic user information
`read:jira-work`: for getting information about issues, comments, etc
And the following granular scopes:
`read:account`: for getting user emails
`read:group:jira`: for retrieving group members
`read:avatar:jira`: for retrieving group members' avatars
You will need a web browser and a terminal with `curl` available (such as the macOS terminal, Linux, an AWS Cloud Shell, etc.).
Go to https://developer.atlassian.com/console/myapps/ and click on "Create" and choose "OAuth 2.0 Integration"
Then click "Authorization" and "Add" on OAuth 2.0 (3L0)
, adding http://localhost
as callback URI. It can be any URL that matches the URL format and it is required to be populated, but the proxy instance workflow will not use it.
Now navigate to "Permissions" and click on "Add" for Jira API
. Once added, click on "Configure". Add following scopes as part of "Classic Scopes", first clicking on Edit Scopes
and then selecting them:
read:jira-user
read:jira-work
And these from "Granular Scopes":
read:group:jira
read:avatar:jira
read:user:jira
Then go back to "Permissions" and click on "Add" for User Identity API
, only selecting following scopes:
read:account
After adding all the scopes, you should have 1 permission for the `User Identity API` and 5 for the `Jira API`:
Once Configured, go to "Settings" and copy the "Client Id" and "Secret". You will use these to obtain an OAuth refresh_token
.
Build an OAuth authorization endpoint URL by copying the value for "Client Id" obtained in the previous step into the URL below. Then open the result in a web browser:
https://auth.atlassian.com/authorize?audience=api.atlassian.com&client_id=<CLIENT ID>&scope=offline_access%20read:group:jira%20read:avatar:jira%20read:user:jira%20read:account%20read:jira-user%20read:jira-work&redirect_uri=http://localhost&state=YOUR_USER_BOUND_VALUE&response_type=code&prompt=consent
Choose a site in your Jira workspace to allow access for this application and click "Accept". As the callback does not exist, you will see an error; but in your browser's address bar you will see a URL like this:
http://localhost/?state=YOUR_USER_BOUND_VALUE&code=eyJhbGc...
Copy the value of the `code` parameter from that URI. It is the "authorization code" required for the next step.
NOTE: This "Authorization Code" is single-use; if it expires or is used, you will need to obtain a new code by again pasting the authorization URL in the browser.
Now, replace the values in the following command and run it from the command line in your terminal. Replace `YOUR_AUTHENTICATION_CODE`, `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` in the placeholders:
curl --request POST --url 'https://auth.atlassian.com/oauth/token' --header 'Content-Type: application/json' --data '{"grant_type": "authorization_code","client_id": "YOUR_CLIENT_ID","client_secret": "YOUR_CLIENT_SECRET", "code": "YOUR_AUTHENTICATION_CODE", "redirect_uri": "http://localhost"}'
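If successful, the response is JSON roughly of this shape (placeholder values; the exact fields may vary):

```json
{
  "access_token": "eyJhbGc...",
  "refresh_token": "eyJhbGc...",
  "expires_in": 3600,
  "token_type": "Bearer",
  "scope": "read:jira-work read:jira-user read:account offline_access"
}
```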
Set the following variables in AWS Systems Manager Parameter Store / GCP Cloud Secrets (if default implementation):
`PSOXY_JIRA_CLOUD_ACCESS_TOKEN` secret variable with the value of the `access_token` received in the previous response
`PSOXY_JIRA_CLOUD_REFRESH_TOKEN` secret variable with the value of the `refresh_token` received in the previous response
`PSOXY_JIRA_CLOUD_CLIENT_ID` with the `Client ID` value.
`PSOXY_JIRA_CLOUD_CLIENT_SECRET` with the `Client Secret` value.
Obtain the "Cloud ID" of your Jira instance. Use the following command, with the access_token
obtained in the previous step in place of <ACCESS_TOKEN>
below:
curl --header 'Authorization: Bearer <ACCESS_TOKEN>' --url 'https://api.atlassian.com/oauth/token/accessible-resources'
And its response will be something like:
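For illustration only (placeholder values):

```json
[
  {
    "id": "1324a887-45db-1bf4-1e99-ef0ff456d421",
    "name": "your-site",
    "url": "https://your-site.atlassian.net",
    "scopes": ["read:jira-work", "read:jira-user"],
    "avatarUrl": "https://site-admin-avatar-cdn.prod.public.atl-paas.net/avatars/240/rocket.png"
  }
]
```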
Add the `id` value from that JSON response as the value of the `jira_cloud_id` variable in the `terraform.tfvars` file of your Terraform configuration. This will generate all the test URLs with a proper value and will populate the right value for setting up the configuration.
NOTE: A "token family" includes the initial access/refresh tokens generated above as well as all subsequent access/refresh tokens that Jira returns to any future token refresh requests. By default, Jira enforces a maximum lifetime of 1 year for each token family. So you MUST repeat steps 5-9 at least annually or your proxy instance will stop working when the token family expires.