
45 posts tagged with "sprint"


· One min read

Work In Progress​

  • Discussed the process of bucket and object encryption in AWS and determined that a key must be provided.
  • The team will circle back with the development team on the use of KMS.
  • The role assigned for S3 bucket encryption needs to be examined and potentially edited to include specific KMS actions.
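The missing permissions can be sketched as an IAM policy fragment. The account ID and key ARN below are placeholders; the exact actions depend on the operations performed (a later log in this series notes `kms:GenerateDataKey` for uploads and `kms:Decrypt` for reads):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
    }
  ]
}
```

This fragment would be attached to (or merged into) the role already used for S3 bucket access.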

Additional Notes​

  • The team discussed starting sessions earlier to give more time and speed up progress.

Tracking​

  • Cases open: 0
  • Cases closed: 5
    • case TS012906539
    • case TS013042929
    • case TS012831699
    • case TS012704616
    • case TS012702956
  • ibm-client-engineering/solution-sfg-aws#17
  • This flight log is being submitted via PR "06/14/2023 Documentation".

· One min read

Work In Progress​

  • For S3 KMS encryption support we are testing adding kms key permissions (kms:GenerateDataKey for upload/kms:Decrypt for get) to the role assigned via the service account.
  • Waiting for Customer to complete a cluster upgrade and verify that RDS is accessible.
    Customer followed up in an email stating they have upgraded the EKS cluster and SFG is up and running now.

Issues & Challenges​

  • RDS access has been blocked at the DNS level. The customer was told that the AMIs used in the EKS cluster are now out of date and need to be upgraded.

Tracking​

  • Cases open: 0
  • Cases closed: 5
    • case TS012906539
    • case TS013042929
    • case TS012831699
    • case TS012704616
    • case TS012702956
  • ibm-client-engineering/solution-sfg-aws#17
  • This flight log is being submitted via PR "06/14/2023 Documentation".

· One min read

Work In Progress​

  • The team speculated that the DNS was not resolving and that it could be a potential issue with AWS. The customer said they would go back and work with their team on the issue.
  • Our team will collaborate closely with the DEV team to plan and implement a KMS solution.

Completed Today​

  • The team checked and validated the network policies.
  • A curl attempt was made against RDS, but it could not resolve the host.

Issues & Challenges​

  • The team was troubleshooting a connection to the RDS database. The connection problem started recently, and no changes appear to have been applied that would cause it.
  • After some discussion, the team learned that KMS is mandatory for all S3 buckets within the client's environment.

Tracking​

· 2 min read

Work In Progress​

  • It is suspected that the current bucket policy is blocking access. The customer will work with their AWS team to alter the existing bucket policy or create a bucket without KMS.

Completed Today​

  • Added Network Policy to allow traffic on port 443. This is the default port S3 listens on.
  • Checked and verified IAM permissions within the AWS Console.
  • Checked the service account within the cluster and re-annotated.
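The network policy change described above can be sketched as a minimal Kubernetes manifest. The metadata names are illustrative, and the empty `podSelector` assumes the rule should apply namespace-wide:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-s3-egress      # illustrative name
  namespace: sfg             # illustrative namespace
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 443          # default HTTPS port S3 listens on
```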

Issues & Challenges​

  • Multiple attempts were made to upload a file to S3, but the S3 service consistently responded with a "Permission Denied" error. After some investigation, it appeared KMS was enabled and blocking access.

Additional Notes​

Team followed up on a question from the previous session:

  • The team discussed the best way to configure connections for transferring files from multiple mainframe jobs to different S3 buckets. The team suggested using different consumers in File Gateway, each with its own bucket, and setting up static or dynamic routes to determine the destination based on file names or other criteria. They also mentioned the option of having multiple CD (Connect:Direct) producers, each with different credentials, to handle the transfers to specific buckets.

SFG Producer_Consumer Diagram

The team provided additional documentation on this matter:
https://www.ibm.com/docs/en/b2b-integrator/6.1.2?topic=channels-about-routing
https://www.ibm.com/docs/en/b2b-integrator/6.1.2?topic=channels-about-routing-channel-templates

Tracking​

· One min read

Work In Progress​

  • Investigating VPC S3 endpoint notation for the AWSS3Put business process.
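A rough sketch of the endpoint-specific URL form being investigated, assuming an S3 interface (VPC) endpoint. The endpoint ID, region, bucket, and key below are all placeholders, not values from the customer's environment:

```python
# Sketch: path-style URL through an S3 interface (VPC) endpoint.
# AWS publishes endpoint-specific DNS names of the form
# bucket.vpce-<id>.s3.<region>.vpce.amazonaws.com for bucket access.
def s3_vpce_url(vpce_id: str, region: str, bucket: str, key: str) -> str:
    """Build a path-style S3 URL that goes through the given interface endpoint."""
    host = f"bucket.{vpce_id}.s3.{region}.vpce.amazonaws.com"
    return f"https://{host}/{bucket}/{key}"

print(s3_vpce_url("vpce-0123456789abcdef0", "us-east-1", "my-bucket", "inbound/file.txt"))
```

A URL of this shape would replace the default regional S3 endpoint in the AWSS3Put business process configuration.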

Completed Today​

  • Edited S3 business process and added AWS region.
  • Checked logs for previous S3 upload attempts.

Issues & Challenges​

  • After the S3 Business Process was edited, another attempt was made to push a file to S3.

Additional Notes​

Team followed up on a question from the previous session:

  • The question presented a scenario with three mainframe connections that need to be routed to three different buckets. The team suggested an approach using File Gateway to streamline the configuration: create three producer partners (MF1, MF2, MF3) and three consumer partners (S3_1, S3_2, S3_3) for listening connections using the S3 protocol. Three routing channels are then created, each using a template to connect a specific mainframe producer to its corresponding S3 consumer.

Q. Customer had a question about the location of files uploaded with Connect:Direct

Tracking​

· One min read

Work In Progress​

  • Because of the specifications of the customer's environment, it might be necessary to access the S3 bucket through a VPC Endpoint. The team is investigating the implementation of VPC Endpoints in the Business Process.

Completed Today​

  • A Business Process was created for the customer's S3 bucket.

Issues & Challenges​

  • Startup Probe threshold was already updated in the STS for the ASI server pod. However, the ASI server pod had to be scaled up and down in order to start the pod.
  • There was an attempt to push a file to the customer's S3 bucket. This was unsuccessful and team suspected that the S3 bucket needed to be accessed through a VPC Endpoint.

Additional Notes​

  • There was a question about multiple mainframes, each routing to a different bucket: would a Business Process have to be created for each, or could it be done in one Process?

Tracking​

· One min read

Key Accomplishments​

  • We ran a helm chart update and it failed on the patched 2.1.1, since it was missing the pull-secret entry for the preinstall-tls job.
  • Applied series of patch commands in order to run the helm upgrade.
  • Updated the helm chart to add an annotation for the pull-secret

Challenges​

  • Received multiple errors during the helm upgrade

Up Next​

  • Need to update patches for helm charts 2.1.1 to add pull secrets to preinstall-tls job container
  • We need to get the business process set up to access S3
  • Will need to update the service account entry in the overrides to the created S3 service account that has the role annotated.
  • Aneesh will re-run the helm upgrade offline and we will check in on the next working session
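The overrides change for the service account might look like the fragment below. This is a sketch only: the key names depend on the chart's actual values schema, and the service account name is a placeholder for the pre-created, role-annotated account mentioned above:

```yaml
# Illustrative overrides fragment (key names depend on the chart's values schema)
serviceAccount:
  name: s3-access-sa   # hypothetical service account annotated with the S3 role
```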

Tracking​

· One min read

Key Accomplishments​

  • Successfully applied AWS role and policy to allow access to S3 buckets in our reference env.
  • Successfully demoed Sterling Secure Proxy
  • Introduced customer team to new Document site
  • Updated the ALB idle timeout in the ingress annotations in the overrides
    • Verified in our reference environment that this solves a Gateway 403 error when running an S3 business process and updated our documentation. PR#37
    • Updated the customer's ingress annotations in their overrides and applied to their environment
  • Was able to verify with the customer that the following were configured and enabled:
    • OIDC provider assigned to cluster
    • IAM policy for S3 configured for their test bucket
    • Service account was created in the cluster
    • IAM policy was attached to the appropriate role created
    • Service account in cluster was annotated with the role.
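The annotated service account in the last step can be sketched as a manifest. The `eks.amazonaws.com/role-arn` annotation is the standard EKS mechanism (IAM roles for service accounts); the account ID, role name, and service account name below are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-access-sa            # hypothetical name
  namespace: sfg                # illustrative namespace
  annotations:
    # Binds the service account to the IAM role carrying the S3 policy
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-access-role
```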

Up Next​

  • Build a Business Process that uploads a file to the S3 bucket and verify the file was successfully uploaded.

Tracking​

· One min read

Key Accomplishments​

  • Applying a specific netpolicy for RDS outbound port 1521 solved the RDS communication issue
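A minimal sketch of the netpolicy that resolved this, assuming it targets the pods that connect to RDS; the name and pod label are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rds-egress            # illustrative name
spec:
  podSelector:
    matchLabels:
      app: asi                      # hypothetical label for the connecting pods
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 1521                # Oracle/RDS listener port
```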

Lessons Learned​

  • Verified the issue lay with the network policies applied by the helm chart by labeling the oracle client pod and observing the connectivity failure
  • Deleted all ns network policies and now the app pods are coming up

Action Items​

Up Next​

  • Update the overrides file to add in the RDS network policy

Metrics​

Notes​

Tracking​

· One min read

Key Accomplishments​

  • Adjusted resource quotas for AC and ASI for the namespace.
  • Max hard quotas have been bumped up to 32G for memory and 20 for CPU.
  • The namespace quota applies to all pods, not per pod, so we doubled the numbers and applied.
  • Deleted the deployment and adjusted the overrides to disable dbsetup (assuming the DB was successfully configured).
  • Applied the secret name for each ingress that was generated by the tls job. This is temporary, as they should be able to get their certs via an annotation once it’s available.
  • Nodes appeared to scale up; need to see the events log.
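The doubled namespace quota can be sketched as a ResourceQuota manifest; the object name is illustrative, and whether the quota counts `limits.*` or `requests.*` depends on how the original quota was defined:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sfg-quota        # illustrative name
spec:
  hard:
    limits.memory: 32Gi  # namespace-wide cap across all pods, not per pod
    limits.cpu: "20"
```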

Lessons Learned​

Action Items​

Up Next​

Metrics​

Notes​

Tracking​