FAQ

If you can't find the answer you need, please ask our community in the Slack #general_chat channel.

APP USAGE

What’s the difference between flows and jobs?
A flow is a streamlined process that transforms your data, beginning with a "Source" node and ending with an "Output" node. It is similar to a file in traditional applications.
You can create new flows, rename them, duplicate them, export them, and delete them. Each table displayed within a flow is only a preview of the final result and is not saved.
A job is a process of executing the entire flow and saving the output in a file.
Which data transformation features does Tabula have?
We cover all SQL features and more, such as automatic flattening of JSON.
How do I run the job?
When you have completed designing the flow and wish to save the result, click on the "Run" button located in the top right corner.
What happens when I press the "Run" button?
The computation process will begin, utilizing either cloud resources or your local computer, depending on the destination specified in the "Output" node.
Where does Tabula execute my flow?
The choice of resources for flow execution depends on the configuration of the "Source" and "Output" nodes. If Snowflake or PostgreSQL is connected, Tabula will execute the flow using their resources.
If "Local File" is selected, execution takes place on your computer. To ensure the job completes, the Tabula app must be running and the computer must not be in sleep mode.
How does Tabula process data? Does it push any data to a server?
No, the application only verifies your credentials during the initial launch.
It is therefore safe to use sensitive information in your flows, as all data is stored on your local computer. We are actively working on implementing cloud features.
Does Tabula download my data from a warehouse or DB?
Yes, the app downloads a small sample of data to generate a table preview in design mode. This sample data is stored on your local computer where Tabula is running.
All the transformations execute utilizing warehouse or DB resources.
I have a large .CSV file that I cannot open with Excel. Could Tabula open it?
Yes, Tabula can open it no matter the size! The default sampling is 10,000 rows, but you can change it on the Explorer page.
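If you just want to peek at a large CSV outside of Tabula, a few lines of Python can stream the first rows without loading the whole file. This is a generic sketch of the sampling idea, not Tabula's implementation; the function name and default are illustrative (the 10,000-row default mirrors the one mentioned above).

```python
import csv
import itertools

def preview_csv(path, n_rows=10_000):
    """Read only the header and the first n_rows of a CSV.

    The file is streamed row by row, so its total size doesn't matter.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(itertools.islice(reader, n_rows))
    return header, rows
```

For example, `preview_csv("huge.csv", n_rows=100)` returns the column names and the first 100 rows, however large the file is.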
Can Tabula work with BigData, large files, or tables?
Yes, handling large databases is one of our main priorities. If you encounter unique problems when working with large databases, please report this issue in the #general_chat channel.
Could I create a custom function to reuse in the future?
Yes, we currently support scalar user-defined functions. To create them, please follow these instructions.
Only make changes if you fully understand what you are doing, and only include functions that already work in the body.
Custom functions live in the directory below; the bundled *.json files are provided as examples. Open a *.json file in your preferred text editor (such as Notepad++ or Sublime Text), modify the parameters, save it under a new name, then restart Tabula to use it.
  • For Mac OS: “/Users/%username%/Library/Application Support/tabula/latest/udfs”
  • For Windows: “C:\Users\%username%\AppData\Roaming\tabula\latest\udfs”
    Where "%username%" is your username.
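As a purely illustrative sketch, a scalar UDF file might look like the fragment below. The field names here are hypothetical, not Tabula's actual schema — copy one of the bundled example files from the udfs directory and use it as your real template.

```json
{
  "_comment": "Hypothetical example only; use a bundled *.json file for the real schema",
  "name": "with_tax",
  "parameters": ["price"],
  "body": "price * 1.2"
}
```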

AI SUPPORT

How do I use the Magic Column?
  • Open your dataset in Catalog and expand it to full screen with the "Open in Explorer" button
  • Click "Add to flow"
  • On the toolbar, find the "GPT column" node and add it to the canvas
  • Use the prompt window on the property grid to write your custom prompt
  • If you need to send values from any column, mention it with
  • Click the "send" icon inside the prompt window
Are GPT features free?
You have a limited number of trial calls, but you can add your own token generated by OpenAI, and we won't limit your calls.
How do I use my own GPT token?
Open the settings file and add your token to the "token" field, then restart the app.
  • For Mac OS: “/Users/%username%/Library/Application Support/tabula/latest/config/app.yaml”
  • For Windows: “C:\Users\%username%\AppData\Roaming\tabula\latest\config\app.yaml”
    Where "%username%" is your username.
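For example, the relevant part of app.yaml would look like the fragment below. The key name "token" comes from the answer above; the value is a placeholder for your own OpenAI key, and the rest of the file should be left unchanged.

```yaml
# app.yaml (fragment) — replace the placeholder with your own OpenAI API key
token: "sk-your-openai-api-key"
```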

INTEGRATIONS

Can I connect {sourcename} to Tabula?
The application is currently in closed beta and under heavy development. At this time, we have connectors for PostgreSQL and Snowflake. You can export the necessary tables from your database in *.csv format and import them into Tabula. The MS Excel *.xlsx format is also supported.
Note that *.xlsx files will only work if there are no Snowflake/PostgreSQL sources in the current flow.
We plan to provide more connectors in future releases. To stay updated on product developments and share feedback on which integrations you need the most, join our Slack Channel for Data Analysts and Engineers.
How many data sources can I add?
Effectively unlimited: to be precise, you can add up to 1024 different "Source" nodes.
Which sources or target connectors does Tabula have?
We provide full support for PostgreSQL and Snowflake connectors as sources, as targets, and for executing flows. Additionally, we support *.csv and *.xlsx as both a source and a target.
If you can't find the answer you need, please ask our community in the Slack #general_chat channel.