Announcement
· May 9

[Video] Data Fabric in Practice: Lessons Learned

Hi Community,

Enjoy the new video on InterSystems Developers YouTube:

⏯ Data Fabric in Practice: Lessons Learned @ Global Summit 2024

The Data Fabric is a powerful information architecture that is seeing rapid adoption across enterprises, not least among InterSystems customers. In this session, we'll present the concept and goals of a Data Fabric, explain why InterSystems IRIS is a great technology to implement yours, and hear from University of Utah Health about their journey to deploy the pattern in their complex environment.

Presenters:
🗣 @Jess Jowdy, Manager, Healthcare Sales Engineering, InterSystems
🗣 @Benjamin De Boe, Manager, Analytics Product Management, InterSystems
🗣 Reed Barney, Senior Director, Enterprise Data Warehouse, University of Utah Health  

Subscribe to our YouTube channel to stay up to date!

Article
· May 9 · 3 min read

Building the UI by Prompting vs InterSystems IRIS Backend: Lovable, Spec First and REST API

Hi developers!

Observing the avalanche of AI-driven and vibe-coding developer tools that have been appearing lately, almost every month, with more and more exciting dev features, I wondered whether it is possible to leverage them with InterSystems IRIS, at least to build a frontend. And the answer is yes! At least with the approach I followed.

Here is my recipe for prompting a UI against an InterSystems IRIS backend:

  1. Have a REST API on the IRIS side that reflects some OpenAPI (Swagger) spec.
  2. Generate the UI with any vibe-coding tool (e.g., Lovable) and point the UI to the REST API endpoint.
  3. Profit!

Here is the result of my own exercise: a 100% prompted UI on top of an IRIS REST API that lets you list, create, update, and delete entries of a persistent class (Open Exchange, frontend source, video):

What is the recipe in detail?

How do you get the OpenAPI (Swagger) spec for the IRIS backend?

I took the template with persistent class dc.Person, which contains a few simple fields: Name, Surname, Company, Age, etc.

I thought that ChatGPT could generate the Swagger spec, but it was shy about doing it directly from ObjectScript (perhaps it can now), so I generated DDL for the class in the IRIS terminal:

Do $SYSTEM.SQL.Schema.ExportDDL("dc_Sample","*","/home/irisowner/dev/data/ddl.sql")

and fed ChatGPT with the DDL + prompt:

Please create an Open API spec in JSON version 2.0 vs the following DDL which will allow to get all the entries and individual ones, create, update, delete entries. Also, add _spec endpoint with OperationId GetSpec. Please provide meaningful operation id's for all the endpoints. The DDL: CREATE TABLE dc_Sample.Person( %PUBLICROWID, Company VARCHAR(50), DOB DATE, Name VARCHAR(-1), Phone VARCHAR(-1), Title VARCHAR(50) ) GO CREATE INDEX DOBIndex ON dc_Sample.Person(DOB) GO

And it worked quite well - here is the result.

Then I used the ^%REST command-line tool to generate the REST API backend classes.
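
As an alternative to the interactive command-line tool, the same generation can be scripted with the %REST.API class; a minimal sketch (the application package name and the spec file path are just illustrative examples):

set sc = ##class(%REST.API).CreateApplication("dc.Sample.API", "/home/irisowner/dev/data/person-api.json")
if 'sc do $SYSTEM.Status.DisplayError(sc)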

After that, I implemented the IRIS REST backend logic in ObjectScript for the GET, PUT, POST, and DELETE calls. Mostly manually ;) with some help from Copilot in VS Code.
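
For illustration, a GET handler in the implementation class can be shaped roughly like this; a sketch rather than the exact generated code, with the method name following the spec's operation ids:

/// GET handler returning all persons as a JSON array (sketch)
ClassMethod GetAllPersons() As %DynamicAbstractObject
{
    set result = []
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT ID, Name, DOB, Phone, Title, Company FROM dc_Sample.Person")
    while rs.%Next() {
        do result.%Push({
            "id": (rs.%Get("ID")),
            "name": (rs.%Get("Name")),
            "dob": (rs.%Get("DOB")),
            "phone": (rs.%Get("Phone")),
            "title": (rs.%Get("Title")),
            "company": (rs.%Get("Company"))
        })
    }
    quit result
}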

I tested the REST API manually with swagger-ui, and after that everything was ready for building the UI:

The UI was prompted with the Lovable.dev tool, which I fed the following prompt:

Please create a modern, convenient UI vs. the following open API spec, which will allow list, create, update, and delete persons { "swagger":"2.0", "info":{ "title":"Person API", "version":"1.0.0" }, .... the whole swagger spec.

After it was built and tested (manually), I asked Lovable to point it at the REST API endpoint in IRIS. First locally in Docker, and after testing and fixing a few bugs (via prompts), the result was deployed.

A few caveats and lessons learned:

  • REST API security on the IRIS side is not always clear from the start (mostly because of CORS-related topics); e.g., I needed to add a special cors.cls and modify the Swagger spec manually to make CORS work (a sketch follows after this list).
  • The Swagger documentation does not work automatically for IRIS in Docker, but it is fixable with a special _spec endpoint and a few lines of ObjectScript.
  • The Swagger spec for IRIS should be version 2.0, not the latest 3.1.
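
A cors.cls along these lines is one way to handle the CORS point; this is a rough sketch rather than the exact class from the project, and the permissive header values are only an example (tighten them for real deployments):

Class dc.Sample.cors Extends %CSP.REST
{

/// Tell %CSP.REST to invoke OnHandleCorsRequest for incoming requests
Parameter HandleCorsRequest = 1;

/// Set permissive CORS headers (example values; restrict the origin in production)
ClassMethod OnHandleCorsRequest(url As %String) As %Status
{
    do %response.SetHeader("Access-Control-Allow-Origin", "*")
    do %response.SetHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
    do %response.SetHeader("Access-Control-Allow-Headers", "Content-Type, Authorization")
    quit $$$OK
}

}

Hooking it into the REST dispatch class (for example, by adding it as a superclass) lets the endpoints pick up this CORS behavior.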

Other than that, this approach is quite an effective way for a backend IRIS developer to build comprehensive full-stack application prototypes in a very short time with zero knowledge of front-end development.

Share what you think. And what is your vibe-coding experience with IRIS?

Here is the video walkthrough:

Question
· May 9

JSON - is key defined?

I am running into an issue where a JSON response is missing keys...

{
    "count": 0,
    "pageInformation": {
        "offset": 0,
        "size": 10
    },
    "items": []
}

Within the JSON response I am only looking for two fields, portalUrl and portalId. I have tried using $LENGTH and %IsDefined to check whether they are valued, but neither works.

  If (($IsObject(tHttpResponse)) && ($ZCONVERT(tHttpResponse.ContentType,"L") [ "application/json"))
  {
    set responseData = {}.%FromJSON(tHTTPResponse.Data)
    set pResponse = ##class(osuwmc.COM.Response.StringResponse).%New()
    if %IsDefined(responseData.items.%Get(0).portalUrl) {
      set pResponse.COMPortalURL = responseData.items.%Get(0).portalUrl
    } else {
      set pResponse.COMPortalURL = ""
    }
    if %IsDefined(responseData.items.%Get(0).pureId) {
      set pResponse.COMPureID = responseData.items.%Get(0).pureId
    } else {
      set pResponse.COMPureID = ""
    }
  }

Can anyone tell me how I can check to see if a key exists within a JSON response?
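
(A minimal sketch of one way to probe for a key, reusing responseData from the snippet above: %DynamicObject exposes an %IsDefined() instance method, and guarding against an empty items array avoids dereferencing a missing element:)

set portalUrl = ""
if $IsObject(responseData.items) && (responseData.items.%Size() > 0) {
    set item = responseData.items.%Get(0)
    if item.%IsDefined("portalUrl") {
        set portalUrl = item.portalUrl
    }
}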

Article
· May 9 · 10 min read

Creating a cross-regional DR Async with IKO

Regardless of whether an instance of IRIS is in the cloud or not, high availability and disaster recovery are always important considerations. While IKO already allows for the use of NodeSelectors to enforce the scheduling of IRISCluster nodes across multiple zones, multi-region k8s clusters are generally not recommended or even supported in the major CSPs' managed Kubernetes solutions. However, when discussing HA and DR for IRIS, we may want to have an async member in a completely separate region, or even in a different cloud provider altogether. With additional options added to IKO in the development of version 3.8 (specifically 3.8.4+), it's now possible to create this kind of multi-region HA architecture.

In this post we will:

  • Create two different Kubernetes clusters in entirely separate regions.
  • Configure both Kubernetes clusters to allow for bi-directional communication.
  • Configure internal LoadBalancers to connect our IRISCluster nodes.
  • Create a mirror pair and a DR async IKO IRISCluster, one in each Kubernetes cluster.
  • Verify the mirror status of each node and prove that a simple global is accessible on the DR async node.

Previous Work

It should be noted that we have previously experimented with several tools for creating remote DR Async members through IKO and with K8s.

1) We've attempted cross-cluster communication schemes using Calico/Tigera and Istio. In both cases the solutions were overly complicated and did not end up meeting our success criteria.

2) We've used external LoadBalancers to expose mirror members to each other over the internet. While effective, the solution we will cover here allows for much greater security and doesn't expose our IRIS instances to the internet at large in any capacity.

Prerequisites 

To replicate the following work, you will need the following tools:

  • AWS CLI
  • eksctl
  • Helm

Creating and Installing Dependencies on EKS Kubernetes Clusters

It's fairly straightforward to create EKS clusters through the eksctl tool.

  • First create two files, cluster1.yaml and cluster2.yaml

cluster1.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sferguso-cross1
  region: us-east-1
  version: "1.27"
kubernetesNetworkConfig:
  ipFamily: "IPv4"
  serviceIPv4CIDR: "10.100.0.0/16"
vpc:
  cidr: "192.168.0.0/16"
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 5
    volumeSize: 80

cluster2.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sferguso-cross2
  region: us-west-2
  version: "1.27"
kubernetesNetworkConfig:
  ipFamily: "IPv4"
  serviceIPv4CIDR: "10.200.0.0/16"
vpc:
  cidr: "172.16.0.0/16"
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 5
    volumeSize: 80

  • Second, use eksctl to create the clusters (this will take some time):
eksctl create cluster -f cluster1.yaml
eksctl create cluster -f cluster2.yaml

Note in the above configurations that the regions are us-east-1 and us-west-2. This will result in each Kubernetes cluster spanning 2 Availability Zones by default (which generally are their own independent data centers), while the two clusters themselves sit in entirely separate regions for the greatest possible fault tolerance.

Note as well that the CIDRs, both for the kubernetesNetworkConfigs and for the VPCs, are all unique. This lets us be certain exactly where traffic is originating from. At the very least, the VPC CIDRs must be different to ensure no overlap for later steps.

  • Next we need to install our Storage CSI Driver.

If your AWS permissions are wide enough, it's possible to simply run the following commands to install the EBS CSI driver on the two k8s clusters:

eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=sferguso-cross1 --approve
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster sferguso-cross1 --role-name AmazonEKS_EBS_CSI_DriverRole_sferguso-cross1 --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve
eksctl create addon --name aws-ebs-csi-driver --cluster sferguso-cross1 --service-account-role-arn arn:aws:iam::<Your_Account_ID>:role/AmazonEKS_EBS_CSI_DriverRole_sferguso-cross1 --force
 
eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=sferguso-cross2 --approve
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster sferguso-cross2 --region us-west-2 --role-name AmazonEKS_EBS_CSI_DriverRole_sferguso-cross2 --role-only --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve
eksctl create addon --name aws-ebs-csi-driver --cluster sferguso-cross2 --region=us-west-2 --service-account-role-arn arn:aws:iam::<Your_Account_ID>:role/AmazonEKS_EBS_CSI_DriverRole_sferguso-cross2 --force

If you do not have IAM permissions for these commands (true for all ISC AWS accounts), you can create the IAM roles by hand as well. This requires you to make two new roles (AmazonEKS_EBS_CSI_DriverRole_sferguso-cross1 and AmazonEKS_EBS_CSI_DriverRole_sferguso-cross2) in the IAM Console from the AWS-managed AmazonEBSCSIDriverPolicy and a trust policy that looks like the following:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<Your_Account_ID>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<EKS_OIDC>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<region>.amazonaws.com/id/<EKS_OIDC>:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa",
          "oidc.eks.<region>.amazonaws.com/id/<EKS_OIDC>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}

Note that you can find the <EKS_OIDC> in the AWS Console in the Overview tab of your EKS Cluster, and <Your_Account_ID> in the top right dropdown of the Console.
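
If you prefer the command line, the OIDC issuer (whose trailing path segment is the <EKS_OIDC> id) can also be looked up with the AWS CLI, for example:

aws eks describe-cluster --name sferguso-cross1 --region us-east-1 --query "cluster.identity.oidc.issuer" --output text
aws eks describe-cluster --name sferguso-cross2 --region us-west-2 --query "cluster.identity.oidc.issuer" --output text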

  • Then we must install IKO.

Follow the instructions at https://docs.intersystems.com/components/csp/docbook/DocBook.UI.Page.cls?KEY=AIKO36#AIKO36_install

Important: Before running helm install, navigate to the IKO chart's values.yaml and make sure to set useFQDN to false. The rest of the exercise will not work without this setting:

useFQDN: false


Create a Peering Connection between the two EKS-created VPCs

Follow instructions at https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html#same-account-different-region
 

Additionally, follow the instructions at https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html to create routes for traffic to move through the peering connection.

This should result in an ingress route in every route table, like the following:

${cidr_block_othercluster} ${vpc_peering_id}
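
If you are scripting this step, the equivalent route can be added to each route table with the AWS CLI; a sketch with placeholder IDs:

aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 172.16.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0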

Create Internal LoadBalancers 

Now that the two EKS clusters are connected, we need to expose k8s services that will associate themselves with our IKO-defined IRISClusters. These internal load balancers have DNS names that only resolve within either of the VPCs created by EKS.
 

In the Cluster you will use to create an IKO mirror pair apply the following:

mirror-svcs.yaml

apiVersion: v1
kind: Service
metadata:
  name: primary-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 1972
    protocol: TCP
    name: superserver
  - port: 2188
    protocol: TCP
    name: iscagent
  selector:
    statefulset.kubernetes.io/pod-name: crossover-data-0-0
---
apiVersion: v1
kind: Service
metadata:
  name: backup-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 1972
    protocol: TCP
    name: superserver
  - port: 2188
    protocol: TCP
    name: iscagent
  selector:
    statefulset.kubernetes.io/pod-name: crossover-data-0-1

In the Cluster you will use to create an IKO async member apply the following:

async-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: async-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 1972
    protocol: TCP
    name: superserver
  - port: 2188
    protocol: TCP
    name: iscagent
  selector:
    statefulset.kubernetes.io/pod-name: crossover-data-0-100

You can apply these services with the following commands (note that I've chosen my us-east-1 cluster to contain my mirror pair, and us-west-2 to contain my DR async member):

kubectl --context <east> apply -f mirror-svcs.yaml
kubectl --context <west> apply -f async-svc.yaml

  • Once these internal LoadBalancer services come up, create ExternalName services that map cross-cluster.
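
The DNS names to use in the externalName fields below can be read off the internal services once the NLBs are provisioned, for example:

kubectl --context <east> get svc primary-svc -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl --context <east> get svc backup-svc -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl --context <west> get svc async-svc -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'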

In the Cluster you will use to create an IKO mirror pair apply the following:

mirror-cross-svc.yaml


apiVersion: v1
kind: Service
metadata:
  name: crossover-data-0-100
spec:
  type: ExternalName
  externalName: "a74fcb8ce93d94781ae4e0ea9d4d1c41-a8ec6876e5f7d747.elb.us-west-2.amazonaws.com" # Example Host Name of async-svc
  ports:
  - port: 1972
    name: "superserver"
  - port: 2188
    name: "iscagent"

In the Cluster you will use to create an IKO async member apply the following:

async-cross-svcs.yaml


apiVersion: v1
kind: Service
metadata:
  name: crossover-data-0-1
spec:
  type: ExternalName
  externalName: "a59d4f5a0c102479a9d48854da1f0d87-b27eeccb1044a9da.elb.us-east-1.amazonaws.com" # Example Host Name of backup-svc
  ports:
  - port: 1972
    name: "superserver"
  - port: 2188
    name: "iscagent"
---
apiVersion: v1
kind: Service
metadata:
  name: crossover-data-0-0
spec:
  type: ExternalName
  externalName: "a59d4f5a0c102479a9d48854da1f0d87-b27eeccb1044a9da.elb.us-east-1.amazonaws.com" # Example Host Name of primary-svc
  ports:
  - port: 1972
    name: "superserver"
  - port: 2188
    name: "iscagent"

Again, you can apply these with the same style of command:

kubectl --context <east> apply -f mirror-cross-svc.yaml
kubectl --context <west> apply -f async-cross-svcs.yaml

Once all these services are running, we're ready to create the matching IKO IRISClusters. Note that the ExternalName services use the expected naming scheme of the IRISClusters we'll create in a moment. This is why disabling FQDN was so important when we initially installed IKO: without that setting off, IKO will not create a routable mirroring configuration for the cluster using these services. In this way, the primary and failover members route to the async member over the ExternalName service (which references the associated internal LoadBalancer in the other VPC), and vice versa.
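
Before creating the IRISClusters, you can sanity-check the wiring by confirming that the ExternalName services resolve from inside each cluster; a quick sketch using a throwaway pod (assuming the services live in the default namespace):

kubectl --context <east> run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup crossover-data-0-100.default.svc.cluster.local
kubectl --context <west> run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup crossover-data-0-0.default.svc.cluster.local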


Create Mirrored IRISCluster and Async IRISCluster

For the following, I created an AWS ECR repository (sferguso-cross) to push my IRIS images to. EKS automatically has pull access to any ECR repository in the same account. You can either do the same, or use an IKO ImagePullSecret to source IRIS images from containers.intersystems.com.

In the Cluster for the IKO mirror pair apply the following:

iris-mirror.yaml

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: crossover
spec:
  licenseKeySecret:
    name: iris-key-secret
  configSource:
    name: my-iriscluster-config
  tls:
    common:
      secret:
        secretName: common-certs
  topology:
    data:
      image: <Your_Account_ID>.dkr.ecr.us-east-1.amazonaws.com/sferguso-cross:iris-2024.1.0L.217.0
      compatibilityVersion: "2024.1.0"
      mirrored: true

In the Cluster for the Async Member apply the following:

iris-async.yaml

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: crossover
spec:
  licenseKeySecret:
    name: iris-key-secret
  configSource:
    name: my-iriscluster-config
  tls:
    common:
      secret:
        secretName: common-certs
  topology:
    data:
      image: <Your_Account_ID>.dkr.ecr.us-east-1.amazonaws.com/sferguso-cross:iris-2024.1.0L.217.0
      compatibilityVersion: "2024.1.0"
      mirrored: true
      mirrorMap: drasync
      start: 100

And as usual, apply these to the clusters through kubectl:

kubectl --context <east> apply -f iris-mirror.yaml
kubectl --context <west> apply -f iris-async.yaml

Verification

In the Cluster for the IKO mirror pair

$ kubectl exec crossover-data-0-0 -- iris qlist
IRIS^/usr/irissys^2024.1.0L.208.0^running, since Thu Jan 18 16:43:48 2024^iris.cpf^1972^0^0^ok^IRIS^Failover^Primary^/irissys/data/IRIS/^1972

$ kubectl exec crossover-data-0-1 -- iris qlist
IRIS^/usr/irissys^2024.1.0L.208.0^running, since Thu Jan 18 16:44:27 2024^iris.cpf^1972^0^0^ok^IRIS^Failover^Backup^/irissys/data/IRIS/^1972

$ kubectl exec -it crossover-data-0-0 -- iris session IRIS -UIRISCLUSTER
IRISCLUSTER>s ^result="Success!"
$ kubectl exec -it crossover-data-0-1 -- iris session IRIS -UIRISCLUSTER
IRISCLUSTER>w ^result
Success!

In the Cluster for the Async Member

$ kubectl exec crossover-data-0-100 -- iris qlist
IRIS^/usr/irissys^2024.1.0L.208.0^running, since Thu Jan 18 17:07:23 2024^iris.cpf^1972^0^0^ok^IRIS^Disaster Recovery^Connected^/irissys/data/IRIS/^1972

$ kubectl exec -it crossover-data-0-100 -- iris session IRIS -UIRISCLUSTER
IRISCLUSTER>w ^result
Success!

Huzzah! We officially have an Async DR node that is running on the opposite coast from its other mirror members.

Article
· May 9 · 2 min read

Minimal IRIS Container Footprint

Sometimes customers need a small IRIS instance to do something in the cloud and then shut it down, or they need hundreds of containers (say, one per end user or one per interface) with small workloads. This exercise came about to see just how small an IRIS instance could be. For this exercise, we focused on the smallest amount of memory we can configure for an IRIS instance. Do you know all the parameters that affect the memory allocated by IRIS?

Memory Settings

These are the different groups that affect memory allocation by IRIS, and their corresponding parameters:

• Global buffers (32 MB) => The minimum appears to be 32 MB for 8K databases
• Routine buffers (26 MB) => The theoretical minimum from the documentation is 430 buffers x 64 KB per buffer ≈ 26 MB, and in practice it is 26 MB
• Journal buffers (16 MB) => In theory and in practice, the minimum for a Unicode system is 16 MB (8 MB for 8-bit)
• Buffer descriptors (3 MB) => Appears to be 3 MB; it cannot be adjusted
• gmheap (16 MB) => The minimum is 256 pages x 64 KB per page = 16 MB (configured in the CPF in kilobytes = 16384)
• ECP (1 MB) => A few CPF parameters control this: MaxServerConn and MaxServers. Even with MaxServerConn=0 and MaxServers=0, the minimum is 1 MB, because IRIS allocates memory for these structures whether or not you use ECP. With MaxServerConn=1 and MaxServers=2, the default is 5 MB
• Miscellaneous (15 MB) => From my tests, the minimum appears to be 15 MB. I could not go below that

With all of these settings configured this way, the CPF parameters would look like this:

globals=0,0,32,0,0,0
routines=0,0,0,0,0,26
jrnbufs=16
gmheap=16384
MaxServerConn=0
MaxServers=0

So, if you configure IRIS with these parameters in the CPF file (I used IRIS 2024.1), you can expect a total of 111 MB as the minimum amount of memory allocated by IRIS at startup (user processes aside). You can see the memory allocated by IRIS in the messages.log file.

Well, I know, there must be some rounding going on, because if you add up all the individual numbers you get 109 MB, but messages.log reports it as 111 MB.
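
For reference, a quick check of that sum in an IRIS terminal (global buffers + routine buffers + journal buffers + buffer descriptors + gmheap + ECP + miscellaneous):

USER>write 32+26+16+3+16+1+15
109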

During this process, we also discovered that IRIS containers could not allocate Huge Pages. This was reported and has been fixed by InterSystems in subsequent versions.

By the way, this article can also serve as a general reference for learning about the different memory parameters you can tune inside IRIS to get better performance.

I hope you enjoy it!
