At the moment, MySQL, PostgreSQL, Microsoft Azure SQL and Amazon Redshift are supported out of the box on Databox. You have the data ready in your database; now it just needs to be visualized in an easy, concise way so everyone – even your boss – can use it.
So, let’s make this happen!
What Will We Accomplish?
First, we’ll set up a PostgreSQL server, then connect it to Databox and confirm that the connection is working. Finally, we’ll create a Datacard visualizing the data. All this without a single line of code – except for the PostgreSQL queries, of course 🙂
To get started, you’ll need to allow remote connections from Databox’s public IP 184.108.40.206 to your database. The necessary steps to do that will depend on your database, server infrastructure and firewall. Our IP must be able to connect to the database port directly.
For our purposes, let’s assume your PostgreSQL server is up and running, reachable from internet addresses and accepting TCP/IP connections on some TCP port (5432 by default).
We will create a user named databox_ro with permissions to remotely access the database, aptly named mydb in our example:
CREATE USER databox_ro WITH PASSWORD 'secretPassword';
GRANT CONNECT ON DATABASE mydb TO databox_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO databox_ro;    -- PostgreSQL 9.0+
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO databox_ro; -- PostgreSQL 9.0+
Note: for the sake of this example, we have granted this user SELECT permission on all tables in our database. You could instead grant SELECT on a single table only, or better yet, create a custom view and grant SELECT on that view alone. A full treatment is beyond the scope of this document; see the PostgreSQL documentation on GRANT for details.
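As an illustration, here is a minimal sketch of the view-based approach. The view name and the employees table are ours, chosen purely for this example:

```sql
-- Expose only the columns Databox needs, nothing else
CREATE VIEW public.databox_salaries AS
    SELECT salary_date, salary FROM employees;

-- Drop any blanket grant, then allow SELECT on the view only
REVOKE SELECT ON ALL TABLES IN SCHEMA public FROM databox_ro;
GRANT SELECT ON public.databox_salaries TO databox_ro;
```

This way the databox_ro user can read exactly the rows and columns the view exposes, and nothing else in the schema.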
PostgreSQL releases older than 9.0 use slightly different syntax for these grants, so check the docs for your version.
Next, configure the PostgreSQL server to listen on all IPs, as by default it listens on the local interface only. Open /etc/postgresql/9.5/main/postgresql.conf (the default location on Ubuntu and Debian) and check that it contains:
listen_addresses = '*' # Listen on all host IP addresses available
port = 5432 # Listen on this port, default is 5432
If you’ve made any changes, restart the PostgreSQL server and it’s ready to accept remote connections.
service postgresql restart
Now allow the new user to access our newly created database from the Databox IP. Add the following line to pg_hba.conf and reload the database:
host mydb databox_ro 220.127.116.11 255.255.255.255 md5
Port 5432/TCP, PostgreSQL’s default port, must be accessible from our VPC IP mentioned above. This is done on your firewall; a Linux iptables example:
iptables -A FORWARD -s 18.104.22.168/32 -p tcp --dport 5432 -j ACCEPT
Note: you might have to substitute FORWARD with INPUT if your Linux box has a public IP and the database runs on the host itself.
Your server should now be successfully set up to accept requests from our VPC IP to your chosen database/table, using a chosen user and password.
Load some sample data and make some queries in the local console:
sudo -u postgres psql
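For example, here is a tiny, hypothetical employees table you could create for testing – the table and column names are ours, not anything Databox requires, though we’ll reuse them in a query later on:

```sql
-- Hypothetical sample data for testing the connection
CREATE TABLE employees (
    id          serial  PRIMARY KEY,
    salary_date date    NOT NULL,
    salary      numeric NOT NULL
);

INSERT INTO employees (salary_date, salary) VALUES
    ('2016-01-31', 4200),
    ('2016-02-29', 4350);

-- Quick sanity check
SELECT salary_date, salary FROM employees;
```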
With SSL support compiled in, the PostgreSQL server can be started with SSL enabled by setting the parameter ssl to on in postgresql.conf. The server will listen for both normal and SSL connections on the same TCP port, and will negotiate with any connecting client on whether to use SSL.
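If you want encrypted connections, a minimal postgresql.conf sketch looks like this on PostgreSQL 9.2 or newer (the certificate and key file paths are examples – generate or supply your own; older releases instead expect server.crt and server.key in the data directory):

```
ssl = on
ssl_cert_file = 'server.crt'   # server certificate
ssl_key_file = 'server.key'    # private key, must be readable by the postgres user only
```

Restart the server after changing these parameters.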
With our database ready, the next logical step is to connect it and test that it’s returning the data we want.
Great! We have just successfully connected our database to Databox. In the next step, we’ll write a custom query that will regularly fetch data from your database and make it available for use in any Datacard.
Troubleshooting: If you get a “wrong credentials” message, double-check your credentials. If you’re stuck on ‘Activate’ for a minute or so, Databox is probably failing to reach your database host due to firewall / server / networking issues.
Just check the SSL checkbox in the New connection dialog and try connecting. If everything is set up correctly on the server, the client will connect over an SSL-encrypted connection.
Now that the database is connected, we will use the Designer to query, shape and display the data in a format that’s most appropriate and useful for our needs:
SELECT COUNT(p.ID) AS posts, u.display_name, MAX(p.post_date) AS date
FROM dbwp_users u, dbwp_posts p
WHERE p.post_author = u.ID AND p.post_type = 'post'
GROUP BY u.ID, u.display_name
We have just written a custom SQL query and displayed its results. Databox will fetch data from this resource every hour and store it in the selected target data source (in our example, ‘My PostgreSQL’).
Each query must contain a column named date holding a valid date. Take the following SQL query as an example:
SELECT salary_date AS date, salary FROM employees
In the employees table we have a date column named salary_date. Since Databox expects a column named date, we select our salary_date column as date. salary is another column containing a number; its name will be pushed as the metric key salary. This query is valid and can be pushed to Databox.
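For illustration, with a couple of hypothetical rows (the values here are made up), the result set Databox receives would look like this:

```
    date    | salary
------------+--------
 2016-01-31 |   4200
 2016-02-29 |   4350
```

Each row becomes a data point: the date column provides the timestamp and the salary column the value for the salary metric key.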
Troubleshooting: If you don’t see any data, double-check your SQL query by running it directly on your database. If it doesn’t return results there, you have an error somewhere in your query. Also check that the PostgreSQL user has the necessary permissions to access the database from the Databox IP.
Well done! Your database is now connected to Databox, and your queries can be executed and displayed on your mobile, big screen or computer.
Go ahead and explore further. Add more queries, add blocks, explore different types of visualizations. Make that perfect Datacard (or Datawall of course) you always needed but didn’t know how to get. Now you can! Clean and professional, right at your fingertips. Only data that matters, without clutter. The possibilities are truly endless.
Ready to try it for yourself? Sign up for free today and let us know how it went for you.
Remember: we’re always glad to help if you run into any obstacles!