Question
· May 2, 2024

Compact & Truncate

Hi Guys,

I have never run a Compact & Truncate process before. We have a 4.6TB database with 1.9TB of free space that I would like to recover.

When I try to compact first, it asks me for a target in MB, and I am not sure what that means. Does it mean I need enough space to move my entire 4.6TB database? Also, since the system is running live, does the Compact or Truncate process stop or affect our clients' use of the system? And roughly how long would such a process take to finish (minutes or hours)?

Thanks

8 Comments
Question
· May 2, 2024

shrinking .dat file after killing a Global

Hi Guys,

My Cache.dat has expanded a lot, and some globals occupy a lot of disk space with just junk data. I would like to kill some globals and shrink the Cache.dat file because I am running out of disk space. Is there a way to do that without going through the Truncate & Compact database operations?

Thanks

4 Comments
Announcement
· May 2, 2024

InterSystems Open Exchange Applications Digest, April 2024

Hello and welcome to the April 2024 Open Exchange Newsletter.
General Stats:
10 new apps in April
699 downloads in April
936 applications all time
34,128 downloads all time
2,628 developers joined
New Applications
geo-vector-search
By Robert Cemper
hl7v2-to-kafka
By Sylvain Guilbaud
fhirpro
By Brandon Thomas
llama-iris
By Dmitry Maslennikov
langchain-iris
By Dmitry Maslennikov
foreign-tables
By Robert Cemper
deployed-code-template
By Evgeny Shvarov
Database Growth - Data Collection and Analysis
By Ariel Glikman
workshop-workflow
By Luis Angel Pérez Ramos
AriticleSimilarity
By xuanyou du
New Releases
Kano MDM by Ludmila Valerko
v2022.1.0
Some improvements and additions were made to the Kano MDM modules:
  • Functionality for monitoring the duration of the request approval process has been implemented, making it possible to optimize processing time;
  • In the record linking process, a linkage reason output feature has been implemented, which increases data transparency and improves the deduplication process;
  • Data quality control has been extended to every step of the request route, ensuring continuous monitoring of data relevance and accuracy;
  • A feature for viewing duplicates of the "golden record" has been added, improving data management and integrity.
Regarding new functionalities, it is important to highlight:
  1. Significantly Expanded Data Import Functionality:
  • The capability to import from TXT and XLSX files has been implemented, increasing flexibility in data management;
  • A feature for selecting data loading formats has been implemented, expanding control over imported data;
  • Scenarios for handling detected duplicates during import have been developed, improving the quality of incoming data;
  • The option to create and use import templates has been introduced, optimizing the data loading process.
  2. Data Normalization:
  • Normalization has been moved to a separate submodule, significantly improving the data preparation process for deduplication;
  • Both simple and complex normalization rules have been implemented, ensuring deep data processing;
  • Multiple normalization rules can now be applied to a single parameter, enabling finer tuning of the normalization process;
  • Specific normalization rules have been established for each data type, enhancing normalization accuracy.
  3. A "TreeList" component has been implemented, providing the ability to switch to a hierarchical view of data and simplifying work with basic reference materials. This addition improves navigation and interaction with the system, allowing more efficient management of structured data.
These updates are aimed at increasing the efficiency and reliability of the Kano MDM data management system, ensuring a higher level of user satisfaction.
In addition to improvements and innovations in the system itself, a technology solution has been implemented. It enables Kano MDM to operate on the InterSystems IRIS for Health platform. InterSystems IRIS for Health is the world's first and only specialized data processing platform for the healthcare sector. The advancements include support for the healthcare data exchange standard FHIR.
v2023.1.0
The following modifications and additions have been implemented in the Kano MDM modules:
  • Additional information about the method of entity editing has been added to the change log;
  • All data quality-related information is now consolidated into a separate tab for each record, facilitating easy access and data management;
  • Access to the "golden record" information for duplicate records has been provided, enhancing duplicate management and data integrity.
In the area of new functionality development, the following innovations should be noted:
  1. Image Processing:
  • The capability to display image thumbnails in grids and cards has been introduced;
  • The functionality to display files via a link has been added.
  2. Data Import:
  • A feature has been introduced allowing users to choose how data quality is handled during import, enabling them to define a data processing strategy upon upload;
  • The ability to upload large files directly from the server via a specified path has been developed, significantly expanding the potential for working with voluminous data.
  3. Work with the constructor:
  • Functionality to remove multiple fields and columns when customizing forms and tables has been implemented, which greatly increases the flexibility and convenience of user interface configuration.
The implemented improvements and innovations are aimed at optimizing workflows and enhancing data processing quality, which, in turn, leads to improved system performance and end-user satisfaction.
Additionally, a significant technological achievement should be highlighted: specific work methods have been developed for integrating with the comprehensive InterSystems HealthShare solution, enabling effective data exchange in the healthcare sector.
IRIS apiPub by Claudio Devecchi
v1.1.70
Configuration Improvements
Perftools IO Test Suite by Pran Mukherjee
v1.1.0
Added RanIO, a combination of RanRead and RanWrite. Also updated documentation appropriately.
iris-openai by Francisco López
v1.2.0

Release note

Added the following functionalities:
* Audio speech
* Text moderation
Fixed issues:
* Docker ports
MDX2JSON by Eduard Lebedyuk
v3.2.40
Filters renamed to FILTERS
Git for Shared Development Environments by Timothy Leavitt
v2.3.1
Fixed:
* Support for git submodules in package manager-aware setting (#305)
* Web UI's 'More ...' view shows longer branch names (#294)
* Deletion of files in locked environment is now suppressed (#302)
* "Failed to import file" VS Code popup no longer shows up after overwriting file on server once (#264)
* Don't automatically stage files added to source control (#303)
* Performance improvements (#269, #315)
* Checkout of branches whose names contain slashes via Web UI no longer fails (#295)
* Display other developer's username in Web UI's Workspace when hovering over the name of a file they changed (#304)
* Incremental load PullEventHandler now handles file deletion (#299)
* Incremental load PullEventHandler no longer returns a success status if an error was thrown during the pull process (#300)
* CSP applications can now be added to Git successfully (#308)
Most downloaded
DeepSeeWeb
By Anton Gnibeda
MDX2JSON
By Eduard Lebedyuk
WebTerminal
By Nikita Savchenko
ssl-client
By Evgeny Shvarov
iris-cron-task
By Evgeny Shvarov
isc-json
By Timothy Leavitt
isc-rest
By Timothy Leavitt
isc-ipm-js
By Timothy Leavitt
April 2024 · Month at a Glance · InterSystems Open Exchange
Article
· May 2, 2024 · 2m read

What to do about error 5369: the class is currently being compiled by a process

InterSystems FAQ

This error occurs when an instance of the class is already open at compile time.

There are two ways to address this problem:

  1. Terminate the process or application that has the instance open
  2. Use the compile options in Studio's compile menu: check the "Compile classes in use" compile flag and compile.

If you want to determine which process is using the class, try the sample routine below.

 

/// Test.mac
search(classname) public {
    Set pid=""
    Set pid=$Order(^$Job(pid))
    While pid'="" {
        Do checkVars(pid,classname)
        Set pid=$Order(^$Job(pid))
    }
}
checkVars(pid,string) {
    Set $ztrap="err"
    Set var=""
    For {
        // Walk the process's local variable names with $zu(88,1),
        // then fetch each value with $zu(88,2)
        Set var=$zu(88,1,pid,var) Quit:var=""
        Set val=$zu(88,2,pid,var)
        If val[string {
            Write !,pid,":",var," = ",val,!
        }
    }
    Quit
err // error trap set via $ztrap (e.g. the process no longer exists)
    Set $ztrap=""
    Quit
}

This sample routine searches the local variables of user processes to see whether they use the specified class.

≪Execution example≫

USER>do search^Test("Test.Person")
 
2352:p1 = 1@Test.Person
6324:p2 = 2@Test.Person

*In this case, the processes with Pid=2352 and Pid=6324 are using Test.Person.

Article
· May 2, 2024 · 3m read

How to make an ODBC connection on Linux

This article explains how to configure an ODBC connection on Linux.


First, check the Linux version.

$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.4 (Plow)"
:


1. Update the yum packages

$ sudo yum update


2. Install unixODBC

$ sudo yum install unixODBC

Check the installation:

$ which odbcinst
/usr/bin/odbcinst
$ which isql
/usr/bin/isql
$ odbcinst -j
unixODBC 2.3.9
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/ec2-user/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8


3. Install the IRIS client

※Use the installer that matches your Linux version.

$ cd IRIS-2024.1.0.262.0-lnxrh9x64

$ sudo ./irisinstall_client
Your system type is 'Red Hat Enterprise Linux 9 (x64)'.
Enter a destination directory for client components.
Directory: /intersystems/iris
Directory '/intersystems/iris' does not exist.
Do you want to create it <Yes>?
Installation completed successfully


4. Create the configuration file

※Add the following to SYSTEM DATA SOURCES: /etc/odbc.ini

$ sudo vi /etc/odbc.ini

[ODBC Data Sources]
InterSystemsODBC6435 = InterSystemsODBC6435

[InterSystemsODBC6435]
Description=InterSystems ODBC
Driver = /intersystems/iris/bin/libirisodbcur6435.so
Setup = /intersystems/iris/bin/libirisodbcur6435.so
Unicode SQLTypes = 1
Host=***.***.***.***
Namespace=USER
UID=_SYSTEM
Password=SYS
Port=1972


5. Register the ODBCINI environment variable

※Persist the environment variable so that it is available to all users.

$ sudo vi /etc/profile

# ---- Add the following
export ODBCINI=/etc/odbc.ini

(Log out and log back in to apply the setting.)

$ echo $ODBCINI
/etc/odbc.ini

※Reference: name and location of the initialization file


6. Verify the ODBC connection to IRIS

$ isql -v InterSystemsODBC6435
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT count(*) FROM INFORMATION_SCHEMA.TABLES
+---------------------+
| Aggregate_1         |
+---------------------+
| 429                 |
+---------------------+
SQLRowCount returns 1
1 rows fetched
SQL>
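The same DSN can also be used from Python. Below is a minimal sketch, not part of the original article, assuming the third-party pyodbc package is installed (pip install pyodbc) and that the DSN name, user, and password match the /etc/odbc.ini example from step 4:

```python
def build_connection_string(dsn, uid, pwd):
    """Assemble an ODBC connection string for a DSN defined in /etc/odbc.ini."""
    return f"DSN={dsn};UID={uid};PWD={pwd}"

if __name__ == "__main__":
    # Requires a reachable IRIS server and the unixODBC setup from this article.
    import pyodbc  # third-party: pip install pyodbc

    conn = pyodbc.connect(
        build_connection_string("InterSystemsODBC6435", "_SYSTEM", "SYS")
    )
    cursor = conn.cursor()
    # Same query as the isql session above
    cursor.execute("SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES")
    print(cursor.fetchone()[0])
    conn.close()
```

pyodbc resolves the DSN through the same odbc.ini that isql uses, so if the isql check above succeeds, this script should connect with no further configuration.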

 

This article used the IRIS client installer, but it is also possible to install the ODBC driver on its own.
For details, see the following documentation:
Installing and Validating ODBC on UNIX® Systems


For details on defining ODBC data sources, see the following documentation:
Defining ODBC Data Sources on UNIX®
 

【Reference】
Flow for creating an AWS Lambda function that connects to IRIS via PyODBC
How to make a JDBC connection on Linux
