Build: 23.10.30502. Release date: 30th October 2023.
Highlights
The November 2023 release is primarily a bugfix release, with the addition of a number of new features related to the InMemory Database.
New Features
InMemory Database
InMemory DB | TargitDB now has better support for SQL without a FROM clause, e.g. select 1 where 1=1. | |
InMemory DB | There is an optimizer, turned off by default, that can be turned on by adding optimizer=true to targitdb.ini (see the configuration sketch after this table). | The optimizer analyzes the tables that are inner-joined to the first driving table and swaps them if there is no filter on the first main driving table and the other table has 5x larger cardinality and more than 1 million rows. |
InMemory DB | Turn off the column sanity checks | The column sanity checks (i) compare the table row count with the column row counts and (ii) check the number of values against the number of rows. In tiServer, turn them off with the targitdb.ini setting column_sanity_checks=false (see the configuration sketch after this table). In tiImport, use the command set ColumnSanityChecks false. Programmatically, use ColumnSanityChecks.enabled = false; |
InMemory DB | When decompiling columns, if the value count greatly exceeds the row count, decompile into value format. | This means that if you decolumnize and columnize a corrupted table, it will be fixed. |
InMemory DB | When rolling over the targitdb.log files (every day), the server now waits 1 minute before closing the old log file, to give itself time to write any pending logs. | |
InMemory DB | Added a new method to tiServer and IPClient that allows testing the validity of an SQL statement against the server (see the usage sketch after this table). | public bool parseWithParams(String dbName, String sql, Dictionary<String, ImpVar> parameters, out String errors) |
InMemory DB | Updated all Console.WriteLine calls to go through the ConLogger class, to help suppress exceptions when writing to the console. | |
InMemory DB | More parse errors now try to show the line number and position where the parse error occurred. | |
InMemory DB | Added support for cacheable table value functions. | If there are multiple concurrent calls to the same function, they now wait for the first call to finish and reuse its result. The result of the function is also cached for 60 seconds. This can be useful if you want to bind a cube to a more dynamic, multi-step SQL sequence; if a dashboard calls the same function multiple times, the load is reduced to a single call. To make the best use of this feature, bind your filter widgets to the parameters of the table value function. |
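For reference, a minimal targitdb.ini sketch combining the optimizer and sanity-check settings described above. The key names and values come from the items above; the only assumption here is that each setting is written on its own key=value line:

    optimizer=true
    column_sanity_checks=false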
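Below is a hedged C# usage sketch of the new parse method. Only the parseWithParams signature comes from the item above; the IPClient constructor, host and port, ImpVar constructor, and parameter name are assumptions made for illustration:

    using System;
    using System.Collections.Generic;

    class ParseCheckExample
    {
        static void Main()
        {
            // Hypothetical client setup; the real IPClient constructor may differ.
            var client = new IPClient("localhost", 6453);

            // Hypothetical ImpVar construction for a named parameter.
            var parameters = new Dictionary<String, ImpVar>
            {
                ["minRows"] = new ImpVar(1000000)
            };

            // Validate the statement against the server without executing it.
            String errors;
            bool ok = client.parseWithParams("SalesDb", "select 1 where 1 = 1", parameters, out errors);
            Console.WriteLine(ok ? "SQL parsed OK" : "Parse failed: " + errors);
        }
    }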
ETL Studio
ETL Studio | Allow empty datasets returned by JSON data sources | An "AllowEmptyDatasets=TRUE|FALSE" connection string parameter has been added to JSON data sources. It disables the default behavior of throwing an error when there is no data. To ensure backward compatibility, the default value is FALSE. |
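As an example, a JSON data source connection string could include the new parameter as follows; the Uri key is a hypothetical placeholder, and only AllowEmptyDatasets comes from this item:

    Uri=https://example.com/feed.json;AllowEmptyDatasets=TRUE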
InMemory Data Drivers
Driver: Dynamics AX7 | Verified that the data driver is compatible with the latest release of Microsoft Dynamics 365 for Finance and Operations (10.0.37). | |
Driver: JSON | Allow empty datasets returned by JSON data sources | An "AllowEmptyDatasets=TRUE|FALSE" connection string parameter has been added to JSON data sources. It disables the default behavior of throwing an error when there is no data. To ensure backward compatibility, the default value is FALSE. |
Bug fixes
Server and Clients
Server | DAX: Invisible levels are being ignored, causing "List index out of bounds" errors | |
Server | Various logging improvements | (i) Whenever a user canceled a query, it was logged as a critical error, potentially causing the server to send alert e-mails to the administrator; the server no longer sends mails when this occurs, and the error no longer ends up in the application log. (ii) Logging when running as a Windows service on-premises was lacking, e.g. sometimes only the exception text was logged, with no stack traces. (iii) If no request context was sent by the client (e.g. the request did not originate from a saved report), "Request context: N/A" was always logged; this is no longer logged. |
Server | Requests sometimes hang when querying through the Gateway using a direct URI | In some cases, when the load through the data gateway is high, requests may hang. |
Server | Deadlock can occur if logging to auxiliary database is enabled | |
Windows client | Defining dimensions after using query abort | If loading a dashboard is aborted, it is not possible to define dimensions in objects that were not loaded. |
Windows client | Autofilter and Dynamic Time will limit static time shown | If you have enabled Auto-filter, select a Dynamic period with a range, and then go into static time, it will not show all periods, only the periods within the range. |
Windows client | Hidden measure causes error for dimension text longer than the cell | In crosstabs with hidden measures, the dimension text wraps without adjusting the row height to fit. |
Windows client | Measure axes will grow | The axis label is much higher than the highest number in the data set, and when switching between Designer and Consumer the axis value increases. |
Windows client | When a user who does not have access to a source in Scheduled Jobs clicks on it, TARGIT crashes | |
Windows client | Switching between "Show data"/"Show charts" hides charts | |
Anywhere
Anywhere | Crosstab with no visible data hangs when printing to PDF | |
Anywhere | Member selector marks members as expanded when they are not | |
Anywhere | Share in Anywhere does not include criteria / dynamic date origin in URL | |
Anywhere | Anywhere not showing MS KPI values | |
Anywhere | Case-sensitive URL global/Global | The error "Data at the root level is invalid. Line 1, position 1" occurs if the URL anywhere/#vfs://Global is used instead of anywhere/#vfs://global. |
Data Discovery
Data Discovery Back-end | Data sources fail with error "No metadata detected for data source" | This error can occur for data sources that contain data source formats and have the "Enable format generation out of process" setting enabled, typically during a restart of Data Discovery or, rarely, during a reload of that data source. |
Data Discovery Back-end | Performance optimization | A change was implemented that improves Data Discovery's performance and reduces its CPU consumption. |
Plugin: Python | Cannot create Python data source | It was not possible to save a data source, yet no validation errors were shown on the Python data source's form. |
Plugin: Python | Creation of Python data source fails with "Invalid syntax" error | |
InMemory Database
InMemory Database Engine | No 'Empty Result' message displayed with DistinctCount measure | |
InMemory DB | Fixed a bug where count(someColumn) on 0 rows returned null instead of 0 (see the example after this table). | |
InMemory DB | Timers no longer show times less than 1 ms by default. | |
InMemory DB | When reading targitdb data, if the value count > 0 and the row count = 0, the values are now truncated to try to repair the data. | |
InMemory DB | Using the addConstantColumn function in irdbImport now explicitly sets the value count to 0 if the rowCount is 0. | |
InMemory DB | Executing a query through the engine with a constant column and zero rows now explicitly sets the value count to 0. | |
InMemory DB | Generating a constant column with zero rows now changes the value count to 0. | |
InMemory DB | Fixed an issue where programmatically calling save on a partially lazy-loaded database would corrupt the database. | |
InMemory DB | Fixed a few problems with parse error messages not being returned from the parser (wrong exception type). | |
InMemory DB | If a parse exception happens, the full stack trace is no longer displayed in the log. | |
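To illustrate the count fix listed above, a minimal SQL sketch; the table and column names are hypothetical:

    select count(someColumn) from someTable where 1 = 0
    -- matches 0 rows: previously returned null, now returns 0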
ETL Studio
Scheduler Service | Scheduler Service's RAM consumption increases after script execution | This was a problem especially for relatively big resulting database files, leading to increased RAM consumption by the TARGIT.SchedulerService.exe process. |
InMemory Data Drivers
Online Data Provider: SharePoint | Error is thrown for SharePoint Online Data Provider when a filter is applied | A "400 bad request" error could occur when certain filters were applied to a data source, because an incorrect column name was used. |