<h2 class="wp-block-heading"><strong>Reading JSON from AWS S3 and Writing to Amazon RDS for PostgreSQL</strong></h2>

<p><em>December 19, 2024</em></p>

<h4 class="wp-block-heading"><strong>AWS Prerequisites</strong></h4>

<ol class="wp-block-list"><li>
<strong>AWS Setup</strong>:
<ul><li><strong>Amazon S3</strong>: Ensure the JSON file is uploaded to an S3 bucket.</li><li><strong>Amazon RDS</strong>: Set up an Amazon RDS instance running PostgreSQL. Note the endpoint, database name, username, and password.</li><li><strong>IAM Role</strong>: Attach an IAM role to the instance (or configure AWS credentials) that grants access to the S3 bucket.</li></ul>
</li><li>
<strong>PySpark Setup</strong>:
<ul><li>Install the AWS SDK for Python (<code>boto3</code>), and make sure you have the PostgreSQL JDBC driver (<code>postgresql-&lt;version&gt;.jar</code>).</li></ul>
</li><li>
<strong>JSON File</strong>: Assume the JSON file is stored in S3:
<ul><li>Bucket: <code>my-bucket</code></li><li>Key: <code>data/data.json</code></li></ul>
</li></ol>

<hr class="wp-block-separator"/>

<h4 class="wp-block-heading"><strong>Code Implementation</strong></h4>

<pre class="wp-block-code"><code>from pyspark.sql import SparkSession

# Step 1: Initialize SparkSession
spark = SparkSession.builder \
    .appName("S3 to RDS - JSON to PostgreSQL") \
    .config("spark.jars", "/path/to/postgresql-&lt;version&gt;.jar") \
    .config("spark.hadoop.fs.s3a.aws.credentials.provider",
            "com.amazonaws.auth.DefaultAWSCredentialsProviderChain") \
    .getOrCreate()

# Step 2: Read JSON file from S3
s3_path = "s3a://my-bucket/data/data.json"  # Use the 's3a' scheme for Spark with S3
df = spark.read.json(s3_path)

# Step 3: Inspect the DataFrame
df.printSchema()
df.show()

# Step 4: RDS PostgreSQL connection properties
rds_url = "jdbc:postgresql://&lt;rds-endpoint&gt;:5432/&lt;database&gt;"  # Replace with your RDS endpoint and database
rds_properties = {
    "user": "&lt;your_username&gt;",       # Replace with your RDS username
    "password": "&lt;your_password&gt;",   # Replace with your RDS password
    "driver": "org.postgresql.Driver"
}

# Step 5: Write the DataFrame to a PostgreSQL table on RDS
table_name = "public.user_data"
df.write.jdbc(url=rds_url, table=table_name, mode="overwrite", properties=rds_properties)

print("Data written successfully to Amazon RDS PostgreSQL!")
</code></pre>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>Detailed Changes for AWS</strong></h3>

<ol class="wp-block-list"><li>
<strong>Read JSON from S3</strong>:
<ul><li>Use the S3 URI format <code>s3a://&lt;bucket&gt;/&lt;key&gt;</code>. Ensure the Spark session is configured to use the S3A connector with AWS credentials.</li></ul>
</li><li>
<strong>AWS Authentication</strong>:
<ul><li>Use the <code>DefaultAWSCredentialsProviderChain</code> for credentials.
This provider chain can retrieve credentials from the following sources:
<ul><li>IAM roles on an EC2 instance or EMR cluster.</li><li>Environment variables (<code>AWS_ACCESS_KEY_ID</code>, <code>AWS_SECRET_ACCESS_KEY</code>).</li><li>AWS configuration files (<code>~/.aws/credentials</code>).</li></ul>
</li></ul>
</li><li>
<strong>Amazon RDS</strong>:
<ul><li>Update <code>rds_url</code> with your RDS instance’s endpoint, database name, and port (the PostgreSQL default is 5432).</li><li>The security group associated with the RDS instance must allow inbound traffic from your Spark cluster or instance.</li></ul>
</li><li>
<strong>JDBC Driver</strong>:
<ul><li>Provide the path to the PostgreSQL JDBC driver (<code>postgresql-&lt;version&gt;.jar</code>).</li></ul>
</li></ol>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>IAM Role Configuration</strong></h3>

<p>If running this on an EMR cluster or an EC2 instance, ensure the instance has an IAM role with the following:</p>

<ul class="wp-block-list"><li><code>s3:GetObject</code> permission for accessing the JSON file in S3.</li><li>A security group configuration that allows the instance to connect to the RDS instance.</li></ul>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>Command to Execute the Script</strong></h3>

<p>If saved as <code>s3_to_rds.py</code>, execute it using:</p>

<pre class="wp-block-code"><code>spark-submit --jars /path/to/postgresql-&lt;version&gt;.jar s3_to_rds.py
</code></pre>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>Notes</strong></h3>

<ol class="wp-block-list"><li> <strong>S3 Configuration</strong>: <ul><li>Ensure the JSON file’s bucket and key are correct.</li><li>Set up appropriate permissions for S3 access.
</li></ul></li><li> <strong>Amazon RDS Configuration</strong>: <ul><li>Allow inbound traffic from your Spark cluster by adding the Spark instance’s security group to the RDS instance’s security group rules.</li></ul></li><li> <strong>Performance Optimization</strong>: <ul><li>Use <code>.repartition()</code> or <code>.coalesce()</code> if needed to tune the number of partitions when writing to PostgreSQL.</li></ul></li></ol>

<p>Running this workflow from <strong>AWS Glue</strong> involves leveraging Glue’s integration with S3, PySpark, and Amazon RDS. Glue simplifies the process by handling the infrastructure and letting you focus on the ETL logic. Below is a step-by-step guide.</p>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>Steps to Read JSON from S3 and Write to RDS in AWS Glue</strong></h3>

<h4 class="wp-block-heading"><strong>1. AWS Glue Setup</strong></h4>

<ol class="wp-block-list"><li>
<strong>Create an AWS Glue Job</strong>:
<ul><li>Go to the AWS Glue Console.</li><li>Create a new Glue job and choose the “Spark” job type with Python as the script language (Glue runs PySpark). Note that “Python shell” is a separate, non-Spark job type.</li><li>Configure the IAM role with appropriate permissions for S3, RDS, and Glue.</li></ul>
</li><li>
<strong>IAM Role Permissions</strong>:
Ensure the IAM role assigned to Glue has:
<ul><li><code>s3:GetObject</code> permission for reading from S3.</li><li>Access to the Amazon RDS instance (via security groups).</li><li>Glue permissions for logging and job execution.</li></ul>
</li><li>
<strong>RDS Security Group</strong>:
<ul><li>Ensure the RDS instance’s security group allows inbound connections from the Glue workers; when Glue runs inside your VPC through a Glue connection, allow traffic from the security group attached to that connection.</li></ul>
</li><li>
<strong>JDBC Connection</strong>:
<ul><li>Go to the Glue Console and create a “JDBC Connection” for the RDS instance.
Provide the following details:
<ul><li>Connection type: PostgreSQL.</li><li>JDBC URL: <code>jdbc:postgresql://&lt;RDS-Endpoint&gt;:5432/&lt;database&gt;</code>.</li><li>Username and password for the RDS instance.</li></ul>
</li><li>Test the connection to ensure it’s reachable.</li></ul>
</li><li>
<strong>Upload the PostgreSQL JDBC Driver</strong>:
<ul><li>Download the PostgreSQL JDBC driver (e.g., <code>postgresql-&lt;version&gt;.jar</code>).</li><li>Upload it to an S3 bucket.</li><li>Add the path to the JAR file in the Glue job’s “Dependent JARs path”.</li></ul>
</li></ol>

<hr class="wp-block-separator"/>

<h4 class="wp-block-heading"><strong>2. JSON File in S3</strong></h4>

<p>Ensure the JSON file is stored in S3:</p>

<ul class="wp-block-list"><li>Bucket: <code>my-bucket</code></li><li>Key: <code>data/data.json</code></li></ul>

<hr class="wp-block-separator"/>
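Before wiring the file into a Glue job, it can help to sanity-check that every record in the JSON payload carries the fields the target table expects. The sketch below uses only the standard library and an inlined sample; in practice the payload bytes would come from S3 (for example via a <code>boto3</code> <code>get_object</code> call), and the <code>EXPECTED_FIELDS</code> set is an assumption based on the example data in this post.

```python
import json

# Minimal pre-flight check: parse the JSON payload and confirm each record
# carries the expected fields. EXPECTED_FIELDS is hypothetical - adjust it
# to match your target table's columns.
EXPECTED_FIELDS = {"id", "name", "age", "city"}

def validate_records(payload: str) -> list:
    """Return the parsed records, raising ValueError if any record misses a field."""
    records = json.loads(payload)
    for i, rec in enumerate(records):
        missing = EXPECTED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return records

sample = '[{"id": 1, "name": "Alice", "age": 25, "city": "New York"}]'
records = validate_records(sample)
print(len(records))  # 1
```

Running a check like this before the Glue job starts turns a mid-job schema failure into a fast, cheap pre-flight error.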
<h4 class="wp-block-heading"><strong>3. Glue Job Script</strong></h4>

<p>Below is the PySpark script for the Glue job:</p>

<pre class="wp-block-code"><code>import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

# Initialize GlueContext and Spark
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# S3 input path
s3_input_path = "s3://my-bucket/data/data.json"  # Update with your S3 bucket and file path

# Step 1: Read JSON from S3 into a DataFrame
df = spark.read.json(s3_input_path)

# Step 2: Inspect the DataFrame
df.printSchema()
df.show()

# Step 3: Write the DataFrame to Amazon RDS (PostgreSQL)
rds_jdbc_url = "jdbc:postgresql://&lt;rds-endpoint&gt;:5432/&lt;database&gt;"  # Update with your RDS endpoint and database
rds_properties = {
    "user": "&lt;your_username&gt;",       # Replace with your RDS username
    "password": "&lt;your_password&gt;",   # Replace with your RDS password
    "driver": "org.postgresql.Driver"
}

# Write the data to the RDS table
table_name = "public.user_data"
df.write.jdbc(
    url=rds_jdbc_url,
    table=table_name,
    mode="overwrite",  # Use "append" to add to existing data instead
    properties=rds_properties
)

# Finalize the job
job.commit()
</code></pre>

<hr class="wp-block-separator"/>
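The Glue script above relies on <code>getResolvedOptions</code> to pull named job parameters out of <code>sys.argv</code>. To make its role concrete, here is a pure-stdlib stand-in that mimics the behavior under the assumption that Glue passes parameters as <code>--NAME value</code> token pairs; this is an illustration only, not the real <code>awsglue</code> implementation.

```python
# Sketch of what getResolvedOptions does: scan argv for "--NAME value" pairs
# and return the requested names as a dict, failing loudly if one is missing.
def resolve_options(argv, names):
    args = {}
    it = iter(argv[1:])
    for token in it:
        if token.startswith("--"):
            key = token[2:]
            if key in names:
                args[key] = next(it)
    missing = [n for n in names if n not in args]
    if missing:
        raise KeyError(f"missing required arguments: {missing}")
    return args

argv = ["script.py", "--JOB_NAME", "s3-to-rds", "--extra", "x"]
print(resolve_options(argv, ["JOB_NAME"]))  # {'JOB_NAME': 's3-to-rds'}
```

The useful property to remember is the failure mode: if a required parameter is not supplied to the job, the script fails at startup rather than partway through the write to RDS.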
<h4 class="wp-block-heading"><strong>4. Configure the Glue Job</strong></h4>

<ol class="wp-block-list"><li>
<strong>Script Location</strong>:
<ul><li>Upload the script to an S3 bucket and provide the S3 path in the Glue job configuration.</li></ul>
</li><li>
<strong>Dependent JARs Path</strong>:
<ul><li>Add the S3 path to the PostgreSQL JDBC driver in the “Python library path / JAR files” field.</li></ul>
</li><li>
<strong>Arguments</strong>:
<ul><li>If the driver is not picked up from the dependent JARs path, add the following arguments to the Glue job:
<code>--conf spark.executor.extraClassPath=/path/to/postgresql-&lt;version&gt;.jar
--conf spark.driver.extraClassPath=/path/to/postgresql-&lt;version&gt;.jar
</code>
</li></ul>
</li><li>
<strong>Worker Type and Count</strong>:
<ul><li>Choose worker types and counts based on your data size and processing needs. A common configuration is <code>G.1X</code> with 2–5 workers for small-to-medium datasets.</li></ul>
</li><li>
<strong>Timeout</strong>:
<ul><li>Set an appropriate timeout for the Glue job.</li></ul>
</li></ol>

<hr class="wp-block-separator"/>

<h4 class="wp-block-heading"><strong>5. Execute the Glue Job</strong></h4>

<ul class="wp-block-list"><li>Start the job from the Glue console, or trigger it using AWS Glue workflows or event-based triggers.</li></ul>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>AWS-Specific Considerations</strong></h3>

<ol class="wp-block-list"><li> <strong>RDS Networking</strong>: <ul><li>Ensure the Glue job can reach RDS via a public/private subnet in the same VPC.</li><li>If using private subnets, set up an <strong>AWS Glue Connection</strong> so the job runs inside the VPC and can access RDS.</li></ul></li><li> <strong>Data Volume</strong>: <ul><li>For large datasets, partition the data in S3 and use <code>.repartition()</code> in Spark for optimized writing to RDS.
</li></ul></li><li> <strong>Glue Data Catalog</strong>: <ul><li>Optionally, catalog the JSON file as a table in Glue and use <code>glueContext.create_dynamic_frame.from_catalog()</code> instead of <code>spark.read.json()</code>.</li></ul></li><li> <strong>Monitoring</strong>: <ul><li>Use CloudWatch Logs to monitor job progress and troubleshoot issues.</li></ul></li></ol>

<p>The schema of the target table in PostgreSQL can be determined in one of the following ways:</p>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>1. Schema Inference from the DataFrame</strong></h3>

<ul class="wp-block-list"><li>PySpark automatically infers the schema of the JSON file when reading it into a DataFrame with <code>spark.read.json()</code>.</li></ul>

<h4 class="wp-block-heading">Example:</h4>

<p>Given the following JSON file:</p>

<pre class="wp-block-code"><code>[
    {"id": 1, "name": "Alice", "age": 25, "city": "New York"},
    {"id": 2, "name": "Bob", "age": 30, "city": "Los Angeles"}
]
</code></pre>

<p>PySpark will infer the schema as:</p>

<pre class="wp-block-code"><code>root
 |-- id: long (nullable = true)
 |-- name: string (nullable = true)
 |-- age: long (nullable = true)
 |-- city: string (nullable = true)
</code></pre>

<p>When you use the <code>write.jdbc()</code> method, PySpark maps the inferred schema to the target table in PostgreSQL.</p>

<hr class="wp-block-separator"/>
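To build intuition for what inference does with flat JSON records like these, here is a rough stdlib sketch of the idea: scan every record, note each field’s Python type, and widen to a common Spark type name. This is an illustration of the concept, not Spark’s actual inference code.

```python
# Map Python value types to Spark type names (illustrative subset).
PY_TO_SPARK = {bool: "boolean", int: "long", float: "double", str: "string"}

def infer_schema(records):
    """Infer a {field: spark_type_name} schema from a list of flat dicts."""
    schema = {}
    for rec in records:
        for field, value in rec.items():
            inferred = PY_TO_SPARK.get(type(value), "string")
            # Widen long -> double when a field holds both ints and floats.
            if schema.get(field) == "long" and inferred == "double":
                schema[field] = "double"
            elif field not in schema:
                schema[field] = inferred
    return schema

records = [
    {"id": 1, "name": "Alice", "age": 25, "city": "New York"},
    {"id": 2, "name": "Bob", "age": 30, "city": "Los Angeles"},
]
print(infer_schema(records))
# {'id': 'long', 'name': 'string', 'age': 'long', 'city': 'string'}
```

Note that JSON integers come back as <code>long</code>, not <code>int</code>, which is why the inferred Spark schema above shows <code>long</code> for <code>id</code> and <code>age</code>.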
<h3 class="wp-block-heading"><strong>2. Existing PostgreSQL Table Schema</strong></h3>

<p>If the table already exists in PostgreSQL, PySpark will try to match the DataFrame’s schema to the target table’s schema.</p>

<ul class="wp-block-list"><li><strong>Column Matching</strong>: The DataFrame’s column names and data types must align with the target table’s schema.</li><li>If there’s a mismatch (e.g., missing columns, extra columns, or incompatible data types), the job will fail unless the table is created beforehand or adjusted.</li></ul>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>3. Automatically Creating the Table</strong></h3>

<p>If the table does not exist in PostgreSQL and the <code>write.jdbc()</code> method is used, Spark will attempt to create the table using the schema of the DataFrame.</p>

<ul class="wp-block-list"><li>This requires the <code>user</code> provided in the JDBC connection to have the necessary permissions to create tables in PostgreSQL.</li><li>The created table will have columns based on the DataFrame schema; Spark’s JDBC dialect maps Spark data types to equivalent PostgreSQL types. For example:
<ul><li><code>string</code> → <code>TEXT</code></li><li><code>long</code> → <code>BIGINT</code></li><li><code>double</code> → <code>DOUBLE PRECISION</code></li></ul>
</li></ul>

<h4 class="wp-block-heading">Example of Table Creation</h4>

<p>For the schema above, PySpark will create the following table in PostgreSQL:</p>

<pre class="wp-block-code"><code>CREATE TABLE user_data (
    id BIGINT,
    name TEXT,
    age BIGINT,
    city TEXT
);
</code></pre>

<hr class="wp-block-separator"/>
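The type mapping above can be made concrete with a small helper that renders the <code>CREATE TABLE</code> statement for an inferred schema. This is a sketch for illustration, assuming the simplified three-entry mapping from the list above; it is not the code Spark’s JDBC dialect actually runs.

```python
# Illustrative Spark-to-PostgreSQL type mapping (subset from the text above).
SPARK_TO_PG = {"string": "TEXT", "long": "BIGINT", "double": "DOUBLE PRECISION"}

def create_table_ddl(table, schema):
    """Render a CREATE TABLE statement for a [(column, spark_type), ...] schema."""
    cols = ",\n    ".join(f"{name} {SPARK_TO_PG[dtype]}" for name, dtype in schema)
    return f"CREATE TABLE {table} (\n    {cols}\n);"

schema = [("id", "long"), ("name", "string"), ("age", "long"), ("city", "string")]
print(create_table_ddl("user_data", schema))
```

Generating the DDL yourself like this is also a practical option: creating the table manually ahead of time lets you add constraints (primary keys, <code>NOT NULL</code>) that Spark’s automatic creation will not.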
<h3 class="wp-block-heading"><strong>4. Providing the Schema Explicitly</strong></h3>

<p>You can explicitly define the schema for the DataFrame before writing it to PostgreSQL. This is useful if:</p>

<ul class="wp-block-list"><li>The JSON file does not contain all the columns in the target table.</li><li>You want to enforce strict typing for the DataFrame schema.</li></ul>

<h4 class="wp-block-heading">Example of Explicit Schema Definition:</h4>

<pre class="wp-block-code"><code>from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True),
    StructField("city", StringType(), True)
])

df = spark.read.schema(schema).json("s3://my-bucket/data/data.json")
</code></pre>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>5. Handling Schema Mismatches</strong></h3>

<p>To ensure the schema matches the PostgreSQL table:</p>

<ol class="wp-block-list"><li><strong>Align the DataFrame columns with the table schema</strong>:
<ul><li>Use <code>select()</code> to reorder columns or drop extra columns:
<code>df = df.select("id", "name", "age", "city")
</code>
</li></ul>
</li><li><strong>Cast Data Types Explicitly</strong>:
<ul><li>Use <code>cast()</code> to adjust data types if needed:
<code>from pyspark.sql.functions import col
df = df.withColumn("id", col("id").cast("int"))
</code>
</li></ul>
</li></ol>

<hr class="wp-block-separator"/>
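The two mismatch checks above (column alignment and type casting) can be sketched as a plain-Python comparison that reports exactly what would need a <code>select()</code> or a <code>cast()</code>. The schemas here are hypothetical dicts, and for simplicity both sides are assumed to use a shared type vocabulary; a real check would first normalize Spark and PostgreSQL type names (e.g., <code>long</code> → <code>bigint</code>) before comparing.

```python
# Compare a DataFrame schema against a target table schema and report:
#   - columns the DataFrame lacks (would break inserts into NOT NULL columns)
#   - extra DataFrame columns (candidates for dropping via select())
#   - columns whose types differ (candidates for cast())
def diff_schemas(df_schema: dict, table_schema: dict) -> dict:
    return {
        "missing_in_df": sorted(table_schema.keys() - df_schema.keys()),
        "extra_in_df": sorted(df_schema.keys() - table_schema.keys()),
        "type_mismatches": sorted(
            col for col in df_schema.keys() & table_schema.keys()
            if df_schema[col] != table_schema[col]
        ),
    }

df_schema = {"id": "int", "name": "string", "age": "long", "nickname": "string"}
table_schema = {"id": "long", "name": "string", "age": "long", "city": "string"}
print(diff_schemas(df_schema, table_schema))
# {'missing_in_df': ['city'], 'extra_in_df': ['nickname'], 'type_mismatches': ['id']}
```

Running a diff like this before <code>write.jdbc()</code> turns a vague mid-write failure into a precise, actionable report.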
<h3 class="wp-block-heading"><strong>6. Schema Validation</strong></h3>

<p>Before writing to the table, validate the DataFrame schema against the target table schema using PostgreSQL queries.</p>

<h4 class="wp-block-heading">Example of Query to Check Table Schema in PostgreSQL:</h4>

<pre class="wp-block-code"><code>SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'user_data';
</code></pre>

<p>Compare the result with the DataFrame schema reported by <code>df.printSchema()</code>.</p>

<hr class="wp-block-separator"/>

<h3 class="wp-block-heading"><strong>Summary of Steps</strong></h3>

<ol class="wp-block-list"><li>Ensure the JSON file structure aligns with the PostgreSQL table schema.</li><li>If the table doesn’t exist:
<ul><li>Let Spark create the table automatically, or</li><li>Create it manually in PostgreSQL using <code>CREATE TABLE</code>.</li></ul>
</li><li>Validate schemas by inspecting the DataFrame (<code>df.printSchema()</code>) and the table structure.</li><li>Explicitly define or adjust the schema in PySpark, if necessary.</li></ol>