FAQ
A flow is the pipeline that transforms your data: it begins with a "Source" node and ends with an "Output" node. It plays a role similar to a file in traditional applications.
You can create new flows, rename them, duplicate them, export them, and delete them. Each table displayed within the flow is only a preview of the final result and is not saved.
A job is a single run of the entire flow that saves the output to a file.
The choice of resources for flow execution depends on the configuration of the "Source" and "Output" nodes. If Snowflake or PostgreSQL is connected, Tabula will execute the flow using their resources.
If "Local File" is selected, execution will take place on your computer. To ensure proper job execution, the Tabula app must be running, and the computer should not be in sleep mode.
No, the application only verifies your credentials during the initial launch.
Therefore, it is safe to use sensitive information in your flows as all data is stored on your local computer. We are actively working on implementing cloud features.
Yes, handling large databases is one of our main priorities. If you encounter unique problems when working with large databases, please report this issue in the #general_chat channel.
Yes, we currently support scalar user-defined functions. To create them, follow the instructions below.
Only make changes if you fully understand what you are doing, and remember to only include functions that already work in the "body". Example custom functions are provided in the directories listed below. Open a *.json file in your preferred text editor (such as Notepad++ or Sublime), modify the parameters, save it under a new name, then restart Tabula and use the new function. A purely illustrative sketch of such a file follows the paths below.
- For Mac OS: “/Users/%username%/Library/Application Support/tabula/latest/udfs”
- For Windows: “C:\Users\%username%\AppData\Roaming\tabula\latest\udfs”

Here "%username%" is the name of your user account.
- Open your dataset in Catalog and expand it to full screen with the “Open in Explorer” button
- Click “Add to flow”
- On the toolbar, find the “GPT column” node and add it to the canvas
- Use the prompt window on the property grid to write your custom prompt
- If you need to send values from any column, mention it with
- Click on the “send” icon inside the prompt window
Open the settings file (paths below), add your token to the “token” field, and restart the app. An illustrative snippet follows the paths.
- For Mac OS: “/Users/%username%/Library/Application Support/tabula/latest/config/app.yaml”
- For Windows: “C:\Users\%username%\AppData\Roaming\tabula\latest\config\app.yaml”

Here "%username%" is the name of your user account.
The application is currently in closed beta and undergoing heavy development. At this time, we have connectors for PostgreSQL and Snowflake. You can export the necessary tables from your database in *.csv format and import them into Tabula. MS Excel *.xlsx format is also supported.
It's important to note that *.xlsx files will only work if there are no Snowflake/PostgreSQL sources in the current flow.
We plan to provide more connectors in future releases. To stay updated on product developments and share feedback on which integrations you need the most, join our Slack Channel for Data Analysts and Engineers.
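For example, with PostgreSQL one way to export a table to CSV is the psql client's \copy meta-command (the table and file names below are placeholders):

```
\copy my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER true)
```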