
Article
· March 13, 2024 · 5 min read

OpenTelemetry Traces from IRIS-implemented SOAP Web Services

A customer recently asked if IRIS supported OpenTelemetry, as they were seeking to measure the time that IRIS-implemented SOAP Services take to complete. The customer already has several other technologies that support OpenTelemetry for process tracing. At this time, InterSystems IRIS (IRIS) does not natively support OpenTelemetry.

It's fair to say that the IRIS data platform has several ways to capture, log and analyse the performance of a running instance, but this information does not flow out of IRIS to other OpenTelemetry components, such as Agents or Collectors, within an implemented OpenTelemetry architecture. Several technologies already support OpenTelemetry, which appears to be becoming a de facto standard for Observability.

Whilst there is ongoing development to natively support this capability in future IRIS releases, this article explains how, with the help of Embedded Python and the corresponding Python libraries, IRIS application developers can start publishing Trace events to their OpenTelemetry back-ends with minimal effort. More importantly, this gives my customer something to get up and running with today.

 

Observability

Observability generally comprises three main aspects:

  • Metrics capture, which is the collection of quantitative measurements about the performance and behaviour of a system, similar to what IRIS publishes via its /api/monitor/metrics API.
  • Logging, which involves capturing and storing relevant information generated by an application or system, such as what appears in System Log outputs or the messages.log file generated by IRIS instances.
  • Tracing, which involves tracking the flow of a service request or transaction as it moves through the various components of a solution. Distributed tracing allows you to follow the path of a request across multiple services, providing a visual representation of the entire transaction flow.

This article, and the accompanying application found here, focuses solely on tracing of SOAP Services.

A Trace identifies an operation within a solution that, in fact, can be satisfied via multiple technologies in an architecture, such as a browser, load balancer, web server and database server.
A Span represents a single unit of work, such as a database update or database query. Spans are the building blocks of a Trace: a Trace starts with a root Span, which may optionally have nested or sibling Spans.

In this implementation, which uses only IRIS as the technology generating telemetry, a Trace and its root Span are started when the SOAP Service is invoked.

Approach for implementation:

Subclass IRIS's %SOAP.WebService class with OpenTelemetry implementation logic and Python library functions in a new class called SOAP.WebService. Include macros that can be used in user code to further contribute to observability and tracing. Minimal changes to the existing SOAP implementation should be needed (replace the use of %SOAP.WebService with SOAP.WebService as the Web Service superclass for implementing SOAP).
The diagram below illustrates this approach:

 

Features of this implementation:

  • By default, every SOAP Service is tracked and reports trace information.
  • When a SOAP Service is used for the first time, the implementation initialises an OpenTelemetry Tracer object. A combination of the IRIS server name and instance is provided as the telemetry source, and the SOAP Action is used as the name for the default root Span tracking the SOAP service (a minimal sketch of this initialisation appears after this list).
  • Telemetry traces and the default Span are automatically closed when the SOAP method call ends.
  • Upon creation, key/value pairs of attributes can be added to the default root Span, such as the CSP Session ID or Job number.
  • Users may use $$$OTELLog(...) to add arbitrary manual logging into a Span, using a simple string or an array of key/value pairs.
  • Users may use $$$OTELPushChildSpan(...)/$$$OTELPopChildSpan(...) to create non-root Spans around sections of code which they want to identify independently in their logic.
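Under the hood, the tracer initialisation relies on the standard OpenTelemetry Python packages via Embedded Python. The sketch below shows the idea only; the function names, the OTLP/gRPC exporter choice and the endpoint are illustrative assumptions, not the exact code used by the SOAP.WebService class in the repository.

# Minimal sketch only: assumes the opentelemetry-sdk and
# opentelemetry-exporter-otlp packages are installed for IRIS's Embedded Python.
# Function names and the endpoint below are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

def init_tracer(server_name, instance_name):
    # The IRIS server and instance names identify the telemetry source
    resource = Resource.create({"service.name": server_name + ":" + instance_name})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)))
    trace.set_tracer_provider(provider)
    return trace.get_tracer("soap.webservice")

def start_root_span(tracer, soap_action, attributes):
    # The SOAP Action names the root span; CSP Session ID, Job number etc.
    # can be attached as key/value attributes. Returns a context manager.
    return tracer.start_as_current_span(soap_action, attributes=attributes)

# Example usage (names are illustrative):
# tracer = init_tracer("MYSERVER", "IRIS")
# with start_root_span(tracer, "MyService.Divide", {"csp.session": "abc", "job": 1234}):
#     ...  # service logic runs inside the root span

On the ObjectScript side, adopting the implementation in an existing web service should need little more than the superclass swap. A hypothetical service might look like the following; the class, method and log text are illustrative, loosely modelled on the SOAP.MyService sample in the repository.

/// Illustrative only: the required change is extending SOAP.WebService
/// instead of %SOAP.WebService; the $$$OTELLog call is optional.
Class MyApp.MathService Extends SOAP.WebService
{

/// Name of the WebService.
Parameter SERVICENAME = "MathService";

Method Divide(arg1 As %Numeric, arg2 As %Numeric) As %Numeric [ WebMethod ]
{
    // Add an arbitrary log entry to the currently active span
    $$$OTELLog("Dividing "_arg1_" by "_arg2)
    Quit arg1 / arg2
}

}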

 

Installation and testing

  • Clone/git pull the repo into any local directory
$ git clone https://github.com/pisani/opentelemetry-trace-soap.git
  • Open a terminal window in this directory and type the following to build the IRIS images with sample code:
$ docker-compose build
  • Once the IRIS image is built, in the same directory type the following to start up the Jaeger and IRIS containers:
$ docker-compose up -d

This will start up two containers - the Jaeger OpenTelemetry target back-end container (which also exposes a user interface), and an instance of IRIS which will serve as the SOAP Web Services server endpoint. Three simple web services have been developed in the IRIS instance for testing the solution.

 

  • Using your browser, access the SOAP information and testing pages via the URL below, logging in as superuser/SYS if prompted:
http://localhost:52773/csp/irisapp/SOAP.MyService.cls

(Note: These pages are not enabled by default; security within the running IRIS instance has been relaxed to enable this feature, for ease of testing.)

Select each of the web methods you want to test in order to generate SOAP activity. To see this implementation report an error in the observed traces, pass zero (0) as the second number to the Divide() SOAP method in order to force a <DIVIDE> error.

  • Open another browser tab and pull up the Jaeger UI via the following URL:
http://localhost:16686
  • The resulting landing page shows all services contributing telemetry readings and should look similar to the screenshot below:

 

Conclusion

In summary, this article demonstrates how Embedded Python can be used to add additional features to IRIS, in my case to implement Observability tracing for SOAP services. The range of options available via Python libraries, and IRIS's ability to leverage them, is truly impressive.

I recognise that further work could produce a more generic OpenTelemetry support class that implements the same for REST services, as well as extend the current class method signatures to track the timing of any class method through this framework.

Question
· March 13, 2024

How to send PUT using HS.FHIR.DTL.Util.HC.SDA3.FHIR.Process?

Dear,

I'm trying to configure a new interface that reads HL7, transforms it into FHIR messages, and then sends a POST, PUT, or DELETE depending on the HL7 document type.

1-I added an HL7 TCP service that reads ADT messages

2a-Send ADTs to a process to transform them into SDA  (using the following command:  do ##class(HS.Gateway.HL7.HL7ToSDA3).GetSDA(request,.con))

2b-Extract the patient MRN and add it to the AdditionalInfo property  (using the following request message class: HS.Message.XMLMessage)

3-Send the SDA message to the built in process: HS.FHIR.DTL.Util.HC.SDA3.FHIR.Process.

4-Send FHIR request to HS.FHIRServer.Interop.Operation

My only problem is that every ADT message is being transformed to a POST FHIR message.

Can you help me control the method that is being used (POST), and change it to a PUT or DELETE when needed?

Article
· March 12, 2024 · 5 min read

Orchestrating Secure Management Access in InterSystems IRIS with AWS EKS and ALB

As an IT and cloud team manager with 18 years of experience with InterSystems technologies, I recently led our team in the transformation of our traditional on-premises ERP system to a cloud-based solution. We embarked on deploying InterSystems IRIS within a Kubernetes environment on AWS EKS, aiming to achieve a scalable, performant, and secure system. Central to this endeavor was the utilization of the AWS Application Load Balancer (ALB) as our ingress controller. 

However, our challenge extended beyond the initial cluster and application deployment; we needed to establish an efficient and secure method to manage the various IRIS instances, particularly when employing mirroring for high availability.

This post will focus on the centralized management solution we implemented to address this challenge. By leveraging the capabilities of AWS EKS and ALB, we developed a robust architecture that allowed us to effectively manage and monitor the IRIS cluster, ensuring seamless accessibility and maintaining the highest levels of security. 

In the following sections, we will delve into the technical details of our implementation, sharing the strategies and best practices we employed to overcome the complexities of managing a distributed IRIS environment on AWS EKS. Through this post, we aim to provide valuable insights and guidance to assist others facing similar challenges in their cloud migration journeys with InterSystems technologies.

Configuration Summary

Our configuration capitalized on the scalability of AWS EKS, the automation of the InterSystems Kubernetes Operator (IKO) 3.6, and the routing proficiency of AWS ALB. This combination provided a robust and agile environment for our ERP system's web services.

Mirroring Configuration and Management Access

We deployed mirrored IRIS data servers to ensure high availability. These servers, alongside a single application server, were each equipped with a Web Gateway sidecar pod. Establishing secure access to these management portals was paramount, achieved by meticulous network and service configuration.

Detailed Configuration Steps

Initial Deployment with IKO:

  • Leveraging IKO 3.6, we deployed the IRIS instances, ensuring they adhered to our high-availability requirements.

Web Gateway Management Configuration:

  • We created server access profiles within the Web Gateway Management interface. These profiles, named data00 and data01, were crucial in establishing direct and secure connectivity to the respective Web Gateway sidecar pods associated with each IRIS data server.
  • To achieve precise routing of incoming traffic to the appropriate Web Gateway, we utilized the DNS pod names of the IRIS data servers. By configuring the server access profiles with the fully qualified DNS pod names, such as iris-svc.app.data-0.svc.cluster.local and iris-svc.app.data-1.svc.cluster.local, we ensured that requests were accurately directed to the designated Web Gateway sidecar pods.

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

 

IRIS Terminal Commands:

  • To align the CSP settings with the newly created server profiles, we executed the following commands in the IRIS terminal:
    • d $System.CSP.SetConfig("CSPConfigName","data00") # on data00
    • d $System.CSP.SetConfig("CSPConfigName","data01") # on data01

https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI...

NGINX Configuration:

  • The NGINX configuration was updated to respond to /data00 and /data01 paths, followed by creating Kubernetes services and ingress resources that interfaced with the AWS ALB, completing our secure and unified access solution.

Creating Kubernetes Services:

  • I initiated the setup by creating Kubernetes services for the IRIS data servers and the SAM (a minimal sketch of one such service appears below):
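For illustration, a per-data-server Service along these lines pins traffic to a single mirror member's Web Gateway sidecar. The names, labels, namespace and port are placeholders rather than the exact manifests from our deployment:

# Illustrative sketch only; names, labels, namespace and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: data00
  namespace: <iris namespace>
spec:
  selector:
    # StatefulSet pods carry this label, so the Service targets one specific data server pod
    statefulset.kubernetes.io/pod-name: <name of the data-0 pod>
  ports:
    - name: http
      port: 80
      targetPort: 80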

 

Ingress Resource Definition:

  • Next, I defined the ingress resources, which route traffic to the appropriate paths using annotations to secure and manage access.

Explanations for the Annotations in the Ingress YAML Configuration:

  • alb.ingress.kubernetes.io/scheme: internal
    • Specifies that the Application Load Balancer should be internal, not accessible from the internet.
    • This ensures that the ALB is only reachable within the private network and not exposed publicly.
  • alb.ingress.kubernetes.io/subnets: subnet-internal, subnet-internal
    • Specifies the subnets where the Application Load Balancer should be provisioned.
    • In this case, the ALB will be deployed in the specified internal subnets, ensuring it is not accessible from the public internet.
  • alb.ingress.kubernetes.io/target-type: ip
    • Specifies that the target type for the Application Load Balancer should be IP-based.
    • This means that the ALB will route traffic directly to the IP addresses of the pods, rather than using instance IDs or other target types.
  • alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
    • Enables sticky sessions (session affinity) for the target group.
    • When enabled, the ALB will ensure that requests from the same client are consistently routed to the same target pod, maintaining session persistence.
  • alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    • Specifies the ports and protocols that the Application Load Balancer should listen on.
    • In this case, the ALB is configured to listen for HTTPS traffic on port 443.
  • alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:il-
    • Specifies the Amazon Resource Name (ARN) of the SSL/TLS certificate to use for HTTPS traffic.
    • The ARN points to a certificate stored in AWS Certificate Manager (ACM), which will be used to terminate SSL/TLS connections at the ALB.

These annotations provide fine-grained control over the behavior and configuration of the AWS Application Load Balancer when used as an ingress controller in a Kubernetes cluster. They allow you to customize the ALB's networking, security, and routing settings to suit your specific requirements.
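Put together, an ingress resource using these annotations might look roughly like the sketch below. The resource name, paths, backend service names and the certificate ARN are placeholders, not our production manifest:

# Illustrative sketch only; names, paths, backend services and the ARN are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: iris-mgmt
  namespace: <iris namespace>
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/subnets: <subnet-internal-a>, <subnet-internal-b>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: <ACM certificate ARN>
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /data00
            pathType: Prefix
            backend:
              service:
                name: data00
                port:
                  number: 80
          - path: /data01
            pathType: Prefix
            backend:
              service:
                name: data01
                port:
                  number: 80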

After configuring the NGINX with location settings to respond to the paths for our data servers, the final step was to extend this setup to include the SAM by defining its service and adding the route in the ingress file.

Security Considerations: We meticulously aligned our approach with cloud security best practices, particularly the principle of least privilege, ensuring that only necessary access rights are granted to perform a task.

DATA00:

 

DATA01:

SAM:

Conclusion: 

This article shared our journey of migrating our application to the cloud using InterSystems IRIS on AWS EKS, focusing on creating a centralized, accessible, and secure management solution for the IRIS cluster. By leveraging security best practices and innovative approaches, we achieved a scalable and highly available architecture.

We hope that the insights and techniques shared in this article prove valuable to those embarking on their own cloud migration projects with InterSystems IRIS. If you apply these concepts to your work, we'd be interested to learn about your experiences and any lessons you discover throughout the process.

Article
· March 11, 2024 · 8 min read

Generating meaningful test data using Gemini

We all know that having a set of proper test data before deploying an application to production is crucial for ensuring its reliability and performance. It allows you to simulate real-world scenarios and identify potential issues or bugs before they impact end-users. Moreover, testing with representative data sets allows you to optimize performance, identify bottlenecks, and fine-tune algorithms or processes as needed. Ultimately, having a comprehensive set of test data helps to deliver a higher quality product, reducing the likelihood of post-production issues and enhancing the overall user experience.

In this article, let's look at how one can use generative AI, namely Gemini by Google, to generate (hopefully) meaningful data for the properties of multiple objects. To do this, I will use the RESTful service to generate data in a JSON format and then use the received data to create objects.

This leads to an obvious question: why not use the methods from %Library.PopulateUtils to generate all the data? Well, the answer is quite obvious as well if you've seen the list of methods of the class - there aren't many methods that generate meaningful data.

So, let's get to it.

Since I'll be using the Gemini API, I will need to generate the API key first since I don't have it beforehand. To do this, just open aistudio.google.com/app/apikey and click on Create API key.

and create an API key in a new project

After this is done, you just need to write a REST client to get and transform data and come up with a query string to a Gemini AI. Easy peasy 😁

For the sake of this example, let's work with the following simple class:

Class Restaurant.Dish Extends (%Persistent, %JSON.Adaptor)
{
Property Name As %String;
Property Description As %String(MAXLEN = 1000);
Property Category As %String;
Property Price As %Float;
Property Currency As %String;
Property Calories As %Integer;
}

In general, it would be really simple to use the built-in %Populate mechanism and be done with it. But in bigger projects you will have a lot of properties that are not so easily populated automatically with meaningful data.

Anyway, now that we have the class, let's think about the wording of a query to Gemini. Let's say we write the following query:

{"contents": [{
    "parts":[{
      "text": "Write a json object that contains a field Dish which is an array of 10 elements. Each element contains Name, Description, Category, Price, Currency, Calories of the Restaurant Dish."}]}]}

If we send this request to https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=APIKEY we will get something like:

 
(sample JSON response omitted)

Already not bad. Not bad at all! Now that I have the wording of my query, I need to generate it as automatically as possible, call it and process the result.

Next step - generating the query. Using the very useful article on how to get the list of properties of a class, we can automatically generate most of the query.

ClassMethod GenerateClassDesc(classname As %String) As %String
{
    set cls=##class(%Dictionary.CompiledClass).%OpenId(classname,,.status)
    set x=cls.Properties
    set profprop = $lb()
    for i=3:1:x.Count() {
        set prop=x.GetAt(i)
        set $list(profprop, i-2) = prop.Name        
    }
    quit $listtostring(profprop, ", ")
}

ClassMethod GenerateQuery(qty As %Numeric) As %String [ Language = objectscript ]
{
    set classname = ..%ClassName(1)
    set str = "Write a json object that contains a field "_$piece(classname, ".", 2)_
        " which is an array of "_qty_" elements. Each element contains "_
        ..GenerateClassDesc(classname)_" of a "_$translate(classname, ".", " ")_". "
    quit str
}

When dealing with complex relationships between classes, it may be easier to use the object constructor to link different objects together, or to use the built-in mechanism of %Library.Populate.

The following step is to call the Gemini RESTful service and process the resulting JSON.

ClassMethod CallService() As %String
{
 Set request = ..GetLink()
 set query = "{""contents"": [{""parts"":[{""text"": """_..GenerateQuery(20)_"""}]}]}"
 do request.EntityBody.Write(query)
 set request.ContentType = "application/json"
 set sc = request.Post("v1beta/models/gemini-pro:generateContent?key=<YOUR KEY HERE>")
 if $$$ISOK(sc) {
    Set response = request.HttpResponse.Data.Read()	 
    set p = ##class(%DynamicObject).%FromJSON(response)
    set iter = p.candidates.%GetIterator()
    do iter.%GetNext(.key, .value, .type ) 
    set iter = value.content.parts.%GetIterator()
    do iter.%GetNext(.key, .value, .type )
    set obj = ##class(%DynamicObject).%FromJSON($Extract(value.text,8,*-3)) // strip the Markdown code fence Gemini wraps around its JSON output
    
    set dishes = obj.Dish
    set iter = dishes.%GetIterator()
    while iter.%GetNext(.key, .value, .type ) {
        set dish = ##class(Restaurant.Dish).%New()
        set sc = dish.%JSONImport(value.%ToJSON())
        set sc = dish.%Save()
    }    
 }
}

Of course, since it's just an example, don't forget to add status checks where necessary.
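Assuming these class methods are added to the Restaurant.Dish class itself (as in this example), the whole pipeline can then be kicked off from the terminal with a single call:

 do ##class(Restaurant.Dish).CallService()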

Now, when I run it, I get a pretty impressive result in my database. Let's run a SQL query to see the data.
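For instance, a query along the following lines (the columns come straight from the Restaurant.Dish class definition above) displays the generated rows:

SELECT Name, Category, Description, Price, Currency, Calories
FROM Restaurant.Dish
ORDER BY Category, Name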

The description and category correspond to the name of the dish. Moreover, the prices and calories look correct as well. This means that I actually get a database filled with reasonably real-looking data, and the results of the queries that I'm going to run are going to resemble real results.

Of course, a huge drawback of this approach is the necessity of writing a query to a generative AI and the fact that it takes time to generate the result. But the actual data may be worth it. Anyway, it is for you to decide 😉

 
P.S.

P.P.S. The first image is how Gemini imagines the "AI that writes a program to create test data" 😆

Article
· March 11, 2024 · 3 min read

Deploying IRIS For Health on OpenShift

In case you're planning on deploying IRIS For Health, or any of our containerized products, via the IKO on OpenShift, I wanted to share some of the hurdles we had to overcome.

As with any IKO-based installation, we first need to deploy the IKO itself. However, we were getting this error:

Warning FailedCreate 75s (x16 over 3m59s) replicaset-controller Error creating: pods "intersystems-iris-operator-amd-f6757dcc-" is forbidden: unable to validate against any security context constraint:

followed by a list of all the security context constraints (SCCs) it could not validate against.

If you're like me, you may be surprised to see such an error when deploying in Kubernetes, because a security context constraint is not a Kubernetes object. This comes from the OpenShift universe, which extends the regular Kubernetes definition (read more about that here). 

What happens is that when we install the IKO via helm (see more on how to do that here) we create a service account.

[ User accounts are for humans. Service accounts are for application processes - Kubernetes docs].

This service account is put in charge of creating objects, such as the IKO pod. However, it fails.

OpenShift has a wide array of security permissions that can be limited, and one way to do this is via the security context constraint. 

What we needed to do was to create the following SecurityContextConstraint:

# Create SCC for ISC resources
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: iris-scc
  namespace: <iris namespace>
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowHostDirVolumePlugin: false
allowHostIPC: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
users:
  - system:serviceaccount:<iris namespace>:intersystems-iris-operator-amd

This gives access to the intersystems-iris-operator-amd service account to create objects by allowing it to validate against the iris-scc.

Next is to deploy the IrisCluster itself (more on that here). But this was failing too, because we needed to give the default service account access to the anyuid SCC, allowing our containers to run as any user (more specifically, we need to let the irisowner/51773 user run the containers!). We do this as follows:

oc adm policy add-scc-to-user anyuid -z default -n <iris namespace>

We then create a rolebinding for the Admin role to the service account intersystems-iris-operator-amd, giving it the ability to create and monitor resources in the namespace. In OpenShift one can do this via the console, or as explained in kubectl create rolebinding.
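For reference, a minimal sketch of such a rolebinding is shown below; the binding name is a placeholder, and the equivalent kubectl one-liner is included as a comment:

# Illustrative sketch; the rolebinding name is a placeholder.
# Equivalent one-liner:
#   kubectl create rolebinding iko-admin --clusterrole=admin \
#     --serviceaccount=<iris namespace>:intersystems-iris-operator-amd -n <iris namespace>
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: iko-admin
  namespace: <iris namespace>
subjects:
  - kind: ServiceAccount
    name: intersystems-iris-operator-amd
    namespace: <iris namespace>
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io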

One very last thing to note is that you may notice the container getting a SIGKILL, as shown in the IRIS messages log:

Initializing IRIS, please wait...
Merging IRIS, please wait...
Starting IRIS
Startup aborted.
Unexpected failure: The target process received a termination signal 9.
Operation aborted.
[ERROR] Command "iris start IRIS quietly" exited with status 256

This could be due to Resource Quotas and Limit Ranges. Take into account that these exist at both the pod level and the container level.
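As an illustration of the kind of object worth checking, a namespace LimitRange like the hypothetical one below imposes default per-container requests and limits; if the effective memory limit is too small for IRIS, the container is terminated with signal 9 exactly as shown above. All values here are placeholders, not sizing guidance:

# Hypothetical example; the values are placeholders, not IRIS sizing guidance.
apiVersion: v1
kind: LimitRange
metadata:
  name: iris-limits
  namespace: <iris namespace>
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 500m
        memory: 1Gi
      default:
        cpu: "1"
        memory: 2Gi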

Hope this helps and happy deploying!

P.S.

You may have noted that in the values.yaml of the Helm chart, there is this snippet:

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

You can actually edit this and use a service account that already exists. For example:

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: false
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: myExistingServiceAccount

Note that this is not a one size fits all, but it could help you if you're deploying on a strict system where you cannot create service accounts, but can use some that already exist.
