
45 posts tagged with "log"


· One min read

Date

This Flight Log contains information relating to steps completed on 05/30/2023

Key Accomplishments

  • CPE was successfully bootstrapped and the object store was created and initialized.
  • We were able to port-forward to log in to ACCE until the dev team finds the root cause of the Ingress errors (a port-forward sketch follows this list).
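
As a stopgap while the Ingress issue is investigated, a port-forward along these lines can reach ACCE. This is a minimal sketch: the namespace, service name, and port are assumptions, not values recorded in this log.

```bash
# Assumptions: namespace "fncm", CPE service "fncmdeploy-cpe-svc", HTTPS port 9443.
# Adjust all three to match the actual deployment.
kubectl -n fncm port-forward service/fncmdeploy-cpe-svc 9443:9443

# While the forward is running, ACCE is reachable locally at:
#   https://localhost:9443/acce
```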

Challenges

  • Ingress update: the IBM Development team is still working on getting this issue resolved. This is the first time errors like this have come up, so they are going through logs to get to the root of the issue.

Action Items

  • We are going to start having daily working sessions until this is complete.
  • In our meeting tomorrow, we will work on bootstrapping the Navigator.

Tracking

· 2 min read

Date

Flight Logs contain information relating to steps completed between 05/22/23 and 05/26/23

Key Accomplishments

  • Redesigned our OpenLDAP implementation to take the following into account:
    • CPE is happier with an OpenLDAP implementation that looks like IBM TDS.
    • Added annotations to the CR to reflect a TDS installation, even though it is still an OpenLDAP deployment
    • Removed any stateful storage from the OpenLDAP deployment
    • Added a schema LDIF to add the specific TDS annotations to the users LDIF, and updated documentation to reflect those changes - PR#13
  • Updated the ibm-fncm-secret with the correct user (cpadmin vs ldap_admin) in the customer environment.
  • Successfully bootstrapped CPE in the customer environment by deleting the previous fncmcluster and re-applying the CR (see the sketch after this list).
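
For reference, a rough sketch of the last two items, assuming a namespace of "fncm", an FNCMCluster resource named "fncmdeploy", and the edited CR saved locally as cr.yaml; the secret key name is illustrative and should be checked against the existing secret before patching.

```bash
# Re-bootstrap CPE by removing the previous FNCMCluster and re-applying the CR.
kubectl -n fncm delete fncmcluster fncmdeploy
kubectl -n fncm apply -f cr.yaml

# Point ibm-fncm-secret at the correct bind user (cpadmin instead of ldap_admin).
kubectl -n fncm get secret ibm-fncm-secret -o yaml          # inspect the real key names first
kubectl -n fncm patch secret ibm-fncm-secret --type merge \
  -p '{"stringData":{"appLoginUsername":"cpadmin"}}'        # key name is illustrative
```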

Challenges

  • Resource restrictions were rearing their ugly head when changes were made to the CR and applied. Apparently, when pods need to be redeployed, the behavior is to deploy the new pods first and then terminate the old ones once the new ones are up, so both sets of pods count against the namespace quota for a time. This was causing violations of the resource quotas (see the quota check sketched after this list).
  • Ingress still appears to be broken in both our reference environment and the customer environment. Still tracking this in TS013093278.
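
The quota violations above are easiest to spot by comparing used versus hard limits in the namespace. A quick check, assuming the FileNet namespace is "fncm":

```bash
# During a redeploy the new pod and the old pod coexist briefly, and both count
# against the namespace ResourceQuota, so the "Used" values need headroom.
kubectl -n fncm get resourcequota
kubectl -n fncm describe resourcequota      # shows Used vs Hard per resource
```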

Action Items

  • Continue working with development to solve the Ingress issue.
  • They are sending our error logs and events to the WebSphere team to see if they can find the root cause of these errors (a log-collection sketch follows this list).
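
A sketch of how the logs and events mentioned above can be gathered; the namespace and label selector are assumptions and need to be adjusted to the real pod labels.

```bash
# Assumptions: namespace "fncm"; CPE pods labelled app=fncmdeploy-cpe-deploy.
kubectl -n fncm get events --sort-by=.lastTimestamp > fncm-events.txt
kubectl -n fncm logs -l app=fncmdeploy-cpe-deploy --all-containers --tail=-1 > cpe-logs.txt
```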

Tracking

· 2 min read

Date

Flight Logs contain information relating to steps completed between 05/18 and 05/19

Key Accomplishments

  • Successfully edited the configuration information within the CR. This included fields under the 'shared_configuration' and 'initialize_configuration' sections, as well as the requested resources (an illustrative outline follows this list).
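
An illustrative outline of the edit described above, assuming the FNCMCluster resource is named "fncmdeploy" in namespace "fncm"; the nested field names come from the product's CR template rather than from this log, so treat them as placeholders.

```bash
kubectl -n fncm edit fncmcluster fncmdeploy

# Sections touched in this step (outline only; values are placeholders):
#   spec:
#     shared_configuration:
#       ...                     # deployment-wide settings
#     initialize_configuration:
#       ...                     # GCD / object store initialization settings
#     # plus the CPU/memory requests and limits for the FileNet containers
```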

Challenges

  • CPE initialization failed
  • After applying the new CR, we received errors because something was preventing the Ingress from bootstrapping ACCE. Whenever we logged in to ACCE, we were presented with a blank screen.

Lessons Learned

  • The Operator deployment takes care of requesting resources for new containers. However, the initialization container it spins up does not have the ability to set CPU and memory limits of its own. In the future, managing resources at the environment level can take care of this issue.
  • When applying the Ingress, give the Route 53 DNS operator time to pick up the correct host name before trying to access it through the browser.
  • When accessing the host, we need to create a certificate to give it a secure connection. This will prevent any "insecure connection" loops (see the TLS sketch after this list).
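
One way to put a certificate in front of the Ingress host is a kubernetes.io/tls secret referenced from the Ingress spec. A sketch, assuming a certificate and key already issued for the host, a namespace of "fncm", a host of "fncm.example.com", and an Ingress named "fncm-ingress" (all placeholders):

```bash
kubectl -n fncm create secret tls fncm-ingress-tls --cert=cert.pem --key=key.pem

# Reference the secret from the Ingress so the browser gets a trusted certificate
# instead of looping on "insecure connection" warnings.
kubectl -n fncm patch ingress fncm-ingress --type merge -p '
spec:
  tls:
    - hosts:
        - fncm.example.com
      secretName: fncm-ingress-tls
'
```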

Action Items

  • Follow up with engineering about allowing a read-write root filesystem so that Dynatrace can work.

Up Next

  • Use the Operator to bootstrap the GCD domain and object store, and then create a Navigator desktop using the CR file.

Tracking

  • Flight log was added by PR on 5/30/2023

· 2 min read

Date

Flight Logs contain information relating to steps completed between 05/09 and 05/12

Key Accomplishments

  • Worked with engineering to fix the resource issues with the original operator build and successfully deployed the operator with the correct resource sizes.
  • Successfully applied the CR to point to the correct repo for the navigator image after patching the daemonset (patch sketched after this list).
  • Successfully got the Ingress to work and connected to the host through the browser over a secure connection.
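
The daemonset patch mentioned above roughly follows the pattern below; the daemonset name, container name, and registry path are assumptions, not values from this log.

```bash
kubectl -n fncm set image daemonset/fncm-navigator \
  navigator=registry.example.com/fncm/navigator:ga-latest   # placeholder image path

# Confirm the rollout picked up the new image.
kubectl -n fncm rollout status daemonset/fncm-navigator
```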

Challenges

  • This customer environment requires resource restrictions to be set on any container that is spun up. The temporary job pod, which uses the operator limits, does not contain any mechanism to set these restrictions.
  • The customer was having issues accessing the newly built Operator image due to registry access permissions. We had to push the image to a public registry with the tag trv2202 for the customer to pull, and then have them push it to their own private registry (mirroring flow sketched after this list).
  • We had issues bringing the FileNet pods online after successfully getting the new Operator image into the client environment.
  • The folder-prepare container kept erroring out because we implemented a readOnlyRootFilesystem, which also prevented Dynatrace from working.
  • We had issues connecting to the host through the browser when trying to get the Ingress to work. We thought this was due to the host name being in the wrong location in the YAML file, but it was actually due to the Route 53 external DNS operator taking some time to pick everything up.
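
The image hand-off described above amounts to a pull/tag/push flow between registries. A sketch where only the trv2202 tag comes from this log; both registry paths are placeholders:

```bash
# Our side: publish the hotfix operator build to a registry the customer can reach.
docker tag  fncm-operator:trv2202 public.example.com/fncm/fncm-operator:trv2202
docker push public.example.com/fncm/fncm-operator:trv2202

# Customer side: mirror it into the private registry their cluster pulls from.
docker pull public.example.com/fncm/fncm-operator:trv2202
docker tag  public.example.com/fncm/fncm-operator:trv2202 \
            registry.customer.example.com/fncm/fncm-operator:trv2202
docker push registry.customer.example.com/fncm/fncm-operator:trv2202
```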

Lessons Learned

  • The Operator deployment takes care of requesting resources for new containers. However, the initialization container it spins up does not have the ability to set CPU and memory limits of its own. In the future, managing resources at the environment level can take care of this issue.
  • When applying the Ingress, give the Route 53 DNS operator time to pick up the correct host name before trying to access it through the browser.
  • When accessing the host, we need to create a certificate to give it a secure connection. This will prevent any "insecure connection" loops.

Action Items

  • Follow up with engineering about allowing a read-write root filesystem so that Dynatrace can work.

Up Next

  • Use the Operator to bootstrap the GCD domain and object store, and then create a Navigator desktop using the CR file.

· 3 min read

Key Accomplishments

  • In a collaborative effort between the customer and our Client Engineering team, we successfully documented and deployed OpenLDAP into the customer's environment.
  • Through collaboration between the customer and our Client Engineering team, we successfully deployed the FileNet Operator into the customer's AWS EKS environment.

Challenges

  • Pre-Staging: While collaborating, we took the opportunity to work closely with the customer on pre-staging their environment and to educate them on the product requirements for both software and environment. See #2

  • Reference Environment: During our collaboration with the customer, we staged an internal cluster to mirror their environment as much as possible in order to facilitate a smooth transfer of knowledge between us. See #3

  • Different Environment: During our collaboration, we worked together with the customer to deploy FileNet FNCM in a shared AWS EKS environment. However, we acknowledged that each environment is unique and may require specific considerations. For instance, the customer was already utilizing Kyverno as a cluster policy manager.

  • Private Registry: To ensure smooth integration, we worked with the customer to identify all the necessary images for FileNet, including Postgres and OpenLDAP. This information was crucial for them to pre-stage their private repository since external traffic was not permitted in their cluster. See #6

  • Cluster Privileges: A collaborative approach required us to determine the cluster privileges available to the customer within their environment. By understanding their permissions, we could effectively align our efforts and ensure seamless integration (see the permission checks sketched after this list).

  • Resource Quota: As part of our combined efforts, we recognized that the cluster was created through an automated process, which automatically assigned namespace resource quotas. This allowed us to optimize resource allocation and ensure efficient usage within the shared environment. See #8

  • Operator Image: During our collaboration, we encountered a blocker in the customer's environment where the default image did not set resource quotas for temporary job containers. Working with the internal dev team, we were able to get a hotfix in place for the operator image. See #9 and the Development Collaboration Slack Thread, also referenced in #8
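
For the cluster-privileges item above, a few quick checks of what the customer account is actually allowed to do; the verbs, resources, and namespace below are examples, not the full privilege list referenced in Lessons Learned.

```bash
kubectl auth can-i create customresourcedefinitions
kubectl auth can-i create deployments -n fncm
kubectl auth can-i create serviceaccounts -n fncm
kubectl auth can-i --list -n fncm     # everything the current account can do in that namespace
```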

Lessons Learned

  • Empowering Education: A key aspect of our collaborative approach was to provide the customer with valuable resources to empower themselves. We shared links that enabled them to proactively learn about Kubernetes, AWS, and even utilize an AWS Sandbox for hands-on practice. This proactive learning approach set a strong foundation and allowed us to make significant progress upon our onsite engagement.

  • Comprehensive Cluster Privilege Guidance: While working together, we recognized that certain privileges were necessary for a smooth installation of the FileNet operator. To ensure a seamless experience for future customers, we took the initiative to identify and compile a comprehensive list of required cluster privileges. By sharing this list, we aimed to minimize any potential roadblocks and foster a more efficient onsite collaboration.

Action Items

  • ReadOnlyRootFileSystem - FNCM 5.5.10 has readOnlyRootFilesystem implemented as part of the security improvements applied to the container image. This causes problems because the customer uses Dynatrace in their environment and these folders cannot be copied, causing the setup jobs to fail when deploying the FileNet pods. For now we are asking the customer to disable Dynatrace (a generic sketch of the pattern follows this list). See #8
  • RFE to be opened - CSFN-I-167
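
The general Kubernetes pattern behind the readOnlyRootFilesystem item: any path a container still needs to write has to be backed by an explicit writable volume. The pod below is a generic illustration of that pattern only, not the FNCM operator's actual spec.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      securityContext:
        readOnlyRootFilesystem: true   # the hardening behavior introduced in 5.5.10
      volumeMounts:
        - name: writable-tmp
          mountPath: /tmp              # writable paths must be declared as volumes
  volumes:
    - name: writable-tmp
      emptyDir: {}
EOF
```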

Up Next

  • Successfully deploying the CR to the FileNet operator. See #8

Metrics

Notes

Tracking