Hi,
I'm moving some data to Redshift on a daily basis. The data is copied into Redshift by a shell script that uses psql to insert rows from a CSV file.
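For reference, the load step looks roughly like this. This is a simplified sketch: the connection string, table name, column names, and CSV path are placeholders, not the actual values from the script.

```sh
#!/bin/sh
# Hypothetical sketch of the daily load step. Connection details,
# table, and file names are placeholders, not the real ones.
REDSHIFT_URL="postgresql://loader@example-cluster:5439/mydb"

# Turn each CSV row into an INSERT against a staging table.
# (No quoting/escaping of values here; illustration only.)
while IFS=, read -r id payload; do
  psql "$REDSHIFT_URL" -c \
    "INSERT INTO staging_events (id, payload) VALUES ('$id', '$payload');"
done < last_week.csv
```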
Since the script runs every day and pulls data from the last week, it would insert a lot of duplicate rows. To avoid this, I compute an MD5 hash for each row and use it to insert only the new rows, skipping the duplicates. The problem is that psql is not computing the hash correctly: when I compute row_hash with the same query from SQL Workbench it comes out fine, but not with psql.
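The dedup query is along these lines. Again, table and column names are placeholders and the columns are assumed to be varchar; the real query is more involved, but the idea is the same: hash the concatenated columns with MD5 and insert only rows whose hash is not already in the target table.

```sql
-- Hypothetical sketch of the dedup step; staging_events, events,
-- id, and payload are placeholder names, not the actual schema.
INSERT INTO events (id, payload, row_hash)
SELECT s.id,
       s.payload,
       MD5(s.id || '|' || s.payload) AS row_hash
FROM staging_events s
LEFT JOIN events e
  ON e.row_hash = MD5(s.id || '|' || s.payload)
WHERE e.row_hash IS NULL;
```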
The shell script that performs this task is stored in S3.
Code-wise everything seems fine, because when I execute the same query from SQL Workbench, I don't see any problem.
Thanks in advance.