Customize Application Information
Set Basic Application Information
Basic application information affects website text display, copyright entity display, OAuth callback after authorization, redirect after payment, and other functions.
Set the basic application information to match your project before deploying.
The project name and deployment URL are set in the environment variable files.
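Because these settings use the NEXT_PUBLIC_ prefix, Next.js inlines them into both server and client bundles at build time, so they can be read from process.env anywhere in the app. A minimal sketch of a consuming helper (absoluteUrl and its fallback are illustrative, not part of the template):

```typescript
// Illustrative helper: build absolute URLs from the configured app URL.
// NEXT_PUBLIC_* variables are inlined by Next.js at build time, so this
// works in both server and client code.
export function absoluteUrl(path: string): string {
  // Fall back to the local development URL if the variable is unset.
  const base = process.env.NEXT_PUBLIC_APP_URL ?? "http://localhost:3000";
  return new URL(path, base).toString();
}
```

A helper like this keeps OAuth callback and payment redirect URLs consistent with the configured deployment URL.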
For the local development environment, set them in .env.development:
# app
NEXT_PUBLIC_APP_URL = "http://localhost:3000"
NEXT_PUBLIC_APP_NAME = "My ShipAny Project"

For the production environment, set them in .env.production:
# app
NEXT_PUBLIC_APP_URL = "https://your-domain.com"
NEXT_PUBLIC_APP_NAME = "My ShipAny Project"

Set Application Icons
Application icons include Logo, Favicon, etc., and should match your brand image.
- The default website Logo file is located at: public/logo.png
- The default favicon file is located at: public/favicon.ico
Please design these two icons for your project and replace the default icon files.
Recommended design tools include AI-assisted options such as Nano Banana Pro and ChatGPT.
After replacing the default icon files, open the project homepage and refresh to see that the Logo at the top and the browser tab icon have been updated to your custom icons.
Set sitemap.xml
sitemap.xml is a file that tells search engines which pages of your website should be indexed.
Before deploying your website, please set the public/sitemap.xml file and update it with your website's page list.
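If your page list changes often, maintaining the XML by hand gets tedious; a minimal Node sketch that generates the file from a route array (buildSitemap, SITE_URL, and the route list are illustrative, not part of the template):

```typescript
// Illustrative sketch: generate sitemap.xml content from a list of routes.
const SITE_URL = "https://your-domain.com";

export function buildSitemap(routes: string[], lastmod: string): string {
  // Emit one <url> entry per route, all sharing the same lastmod timestamp.
  const urls = routes
    .map(
      (route) =>
        `  <url>\n    <loc>${SITE_URL}${route}</loc>\n    <lastmod>${lastmod}</lastmod>\n  </url>`
    )
    .join("\n");
  return `<?xml version='1.0' encoding='utf-8' standalone='yes'?>\n<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>\n`;
}
```

Running this with your routes yields XML in the same shape as the example below.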
<?xml version='1.0' encoding='utf-8' standalone='yes'?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://your-domain.com/</loc>
<lastmod>2025-11-15T10:00:00+00:00</lastmod>
</url>
<url>
<loc>https://your-domain.com/blog</loc>
<lastmod>2025-11-15T10:00:00+00:00</lastmod>
</url>
<url>
<loc>https://your-domain.com/showcases</loc>
<lastmod>2025-11-15T10:00:00+00:00</lastmod>
</url>
</urlset>

Set robots.txt
robots.txt defines access rules for search engine crawlers, telling them which pages may be crawled and which must not be.
Before deploying your website, please set the public/robots.txt file and update it with your website's crawler access rules.
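To sanity-check your rules before deploying, you can test paths against the Disallow patterns using the widely implemented wildcard semantics ("*" matches any character sequence, and patterns match as prefixes of the path). A hedged sketch; the function name is illustrative and the pattern list mirrors a subset of the defaults shown below:

```typescript
// Illustrative check: does a URL path match any Disallow pattern?
// "*" matches any sequence of characters; patterns match as path prefixes.
const disallowed = ["/*?*q=", "/privacy-policy", "/settings/*", "/admin/*"];

export function isDisallowed(path: string, patterns: string[] = disallowed): boolean {
  return patterns.some((pattern) => {
    // Escape regex metacharacters except "*", then turn "*" into ".*".
    const escaped = pattern
      .replace(/[.+?^${}()|[\]\\]/g, "\\$&")
      .replace(/\*/g, ".*");
    return new RegExp(`^${escaped}`).test(path);
  });
}
```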
The default robots.txt file content is as follows:
User-agent: *
Disallow: /*?*q=
Disallow: /privacy-policy
Disallow: /terms-of-service
Disallow: /settings/*
Disallow: /activity/*
Disallow: /admin/*

Set Application Agreements
Application agreements include privacy policy, terms of service, etc., and should match your project positioning and business.
Agreement file content is located in the content/pages directory, including:
- privacy-policy.mdx
- terms-of-service.mdx
If your project supports multiple languages, there will also be corresponding language version agreement files in the content/pages directory, such as:
- privacy-policy.zh.mdx
- terms-of-service.zh.mdx
Please modify the agreement file content according to your project's actual situation.
You can use AI to assist in generating agreement file content. Reference prompt:
The project I am developing is a code template that can quickly build AI SaaS,
please refer to the content on the webpage: https://shipany.ai/docs,
help me modify the agreement files under content/pages.
My project name is: ShipAny Two,
domain is: https://cf-two.shipany.site,
the copyright holder is: ShipAny.AI,
contact email is: support@shipany.ai

After modifying the agreement file content, open the corresponding agreement page to check that the content is correct.
- Privacy Policy page: /privacy-policy
- Terms of Service page: /terms-of-service