Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data availability, security, and performance. Customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
Amazon S3 provides easy-to-use management features to organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements.
Amazon S3 is designed for high reliability and durability, and stores data for millions of applications for companies all around the world.
Nasuni provides file data services, including file storage, backup, ransomware protection, and VPN-less hybrid access.
Nasuni delivers a data storage platform built on object storage, providing a simpler, lower-cost, and more efficient cloud solution that scales to handle rapid unstructured data growth. Department, project, and organizational file shares and application workflows are at the heart of your firm's productivity. Nasuni and Amazon S3 object storage deliver a modern file infrastructure that spans any number of locations, eases administration, and costs less.
Prerequisites
This document assumes that the customer has the following in place:
An Amazon Web Services (AWS) subscription. For details, see How do I create and activate a new AWS account?.
An Identity and Access Management (IAM) user account. For details, see Creating an IAM user in your AWS account.
For the IAM user account, an Access key and a Secret access key for use with Amazon S3. For details, see Managing access keys for IAM users. (This document also provides a procedure for creating an Access key and a Secret access key; see step 16 on page 28.)
Amazon IAM policies. For details, see Creating IAM policies.
On-premises connectivity to the AWS network, such as the following:
AWS Direct Connect. For details, see What is AWS Direct Connect?.
AWS Virtual Private Network. For details, see AWS Virtual Private Network Documentation.
Using Identity and Access Management (IAM) with Nasuni
Nasuni stores all data in Amazon S3 buckets, including all versions, in UniFS format. Each individual Nasuni volume corresponds to an individual Amazon S3 bucket.
Tip: By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit increase to AWS. For details, see Bucket restrictions and limitations.
It is important to define an access policy that permits necessary operations but prevents unwanted outcomes, such as the accidental deletion of a bucket used by Nasuni. You can use Identity and Access Management (IAM) to define several different types of access.
Important: You must have created an Amazon Simple Storage Service account. See http://aws.amazon.com/s3/.
Important: Confirm that your Nasuni account is configured to supply your Amazon S3 credentials. On the Cloud Credentials page, “Amazon S3” or “Amazon S3 GovCloud” should be on the available Cloud Credential providers list. If neither is present, contact Sales or Nasuni Support to enable one or more in your license.
Configuring AWS S3 permissions for Nasuni
Policies are JSON documents in AWS that let you specify who has access to AWS resources and what actions they can perform on those resources. You can attach a policy to an identity or resource to define their permissions. AWS evaluates these policies when the IAM principal makes a request. Permissions in the policies determine whether the request is allowed or denied. For details, see Creating IAM policies.
Nasuni requires the following permissions for AWS S3 Buckets and Objects:
PutObject
GetObject
DeleteObject
CreateBucket
ListBucket
ListAllMyBuckets
GetBucketLocation
DeleteBucket
To configure the required Amazon S3 IAM permissions policy, using JSON, follow these steps:
Go to the Identity and Access Management (IAM) page. Click Policies. Click Create policy. Click the JSON tab.
Create the following permissions policy:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:DeleteBucket"
],
"Resource": "arn:aws:s3:::nasuni*"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::nasuni*/*"
}
]
}
Click Review policy.
Enter a Name for this policy, then click Create policy.
The new policy appears in the list of permissions policies.
You can then use this policy for groups and users that only need the minimum access that Nasuni requires.
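As a convenience, the minimal policy above can also be generated programmatically. The sketch below builds the same JSON document; the `nasuni` bucket prefix matches the resource ARNs shown above, and the helper function name is illustrative.

```python
import json

BUCKET_PREFIX = "nasuni"  # default Nasuni bucket naming prefix

def build_nasuni_policy(prefix: str = BUCKET_PREFIX) -> dict:
    """Return the minimal IAM policy document Nasuni requires for S3."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Listing all buckets takes no bucket-level resource
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "*",
            },
            {   # Bucket-level actions apply to the bucket ARN
                "Effect": "Allow",
                "Action": [
                    "s3:CreateBucket",
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:DeleteBucket",
                ],
                "Resource": f"arn:aws:s3:::{prefix}*",
            },
            {   # Object-level actions apply to objects inside the buckets
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{prefix}*/*",
            },
        ],
    }

print(json.dumps(build_nasuni_policy(), indent=2))
```

You can paste the printed document directly into the JSON tab when creating the policy.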
Preventing accidental deletion by other AWS Tenant Users (Optional)
To limit access for non-Nasuni-S3-user accounts, and to protect your Nasuni data in Amazon S3 from accidental deletion, apply the IAM permissions policy below. This permissions policy prevents Delete actions on Nasuni buckets by other users in the AWS account.
Note: There are other ways to achieve the same result, such as a strict policy implementing IaC (Infrastructure as Code), with all interactive console administrative users having read-only permissions.
Warning: With this limited policy, Nasuni cannot remove snapshots as part of Snapshot Retention, or delete volumes.
To prevent accidental deletion, follow these steps:
Go to the Identity and Access Management (IAM) page. Click Policies. Click Create policy. Click the JSON tab.
Create a permissions policy preventing Delete Actions for Nasuni buckets.
Note: All Nasuni buckets start with “nasunifiler*”.
Create a permissions policy such as the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
· "Effect": "Deny",
"Action": [
"s3:DeleteObject",
"s3:DeleteBucket"
],
"Resource": "arn:aws:s3:::nasunifiler*"
}
]
}
Click Review policy.
Enter a Name for this policy, then click Create policy.
The new policy appears in the list of permissions policies.
Attach the new policy to any Groups that have administrative access to the AWS console by following these steps:
Select the new policy from the list of Policies.
From the Policy actions drop-down list, select Attach, then select the Groups and click Attach policy.
The policy is attached to the groups.
Attach the new policy to any Users that have administrative access to the AWS console by following these steps:
Select the new policy from the list of Policies.
From the Policy actions drop-down list, select Attach, then select the Users and click Attach policy.
The policy is attached to the users.
Specifically, test the effectiveness of the policy with the IAM Policy Simulator by following these steps:
From the Identity and Access Management (IAM) menu, click Dashboard, then click Policy Simulator on the right side. The IAM Policy Simulator appears.
Create a simulation by selecting items such as the following:
Group: such as Admin.
For that group, a Policy: such as the new policy.
From the Select service drop-down list: Amazon S3.
From the Select actions drop-down list: such as DeleteBucket.
Click Run Simulation, then examine results.
For example, the Admin group with a Nasuni bucket deletion prevention policy assigned has the permission denied when trying to simulate DeleteBucket for a bucket resource that starts with “nasunifiler*”.
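The IAM Policy Simulator is the authoritative test, but you can also sanity-check locally which bucket names the deny statement matches, since IAM resource ARNs use simple "*" wildcards. The sketch below is only a wildcard-match illustration, not an IAM evaluation.

```python
from fnmatch import fnmatch

# Mirrors the deny policy's resource pattern "arn:aws:s3:::nasunifiler*".
DENY_RESOURCE = "arn:aws:s3:::nasunifiler*"

def delete_denied(bucket_name: str) -> bool:
    """True if a DeleteBucket on this bucket matches the deny pattern."""
    return fnmatch(f"arn:aws:s3:::{bucket_name}", DENY_RESOURCE)

print(delete_denied("nasunifiler-vol1-abc123"))  # matches: deletion denied
print(delete_denied("my-other-bucket"))          # no match: policy not applied
```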
Storage Classes
Nasuni supports the following Amazon S3 online storage classes:
Amazon S3 Standard - Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. This is a great option for general purpose use.
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) - The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. This is a great option for unknown or changing access patterns.
Amazon S3 Standard-Infrequent Access (S3 Standard-IA) - Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee.
Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) - Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Unlike other Amazon object storage classes, which store data in at least three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ.
Glacier Instant Retrieval - The Amazon S3 Glacier Instant Retrieval storage class delivers the lowest-cost storage for long-lived data that is rarely accessed and requires millisecond retrieval. This is a great option for data that will be retained for at least 90 days.
Important: The Amazon S3 Glacier Instant Retrieval storage class is supported only within a Lifecycle Policy. Nasuni cannot write directly to Glacier Instant Retrieval.
Transitioning objects to other Nasuni-supported Amazon S3 storage classes
You can use lifecycle policies to define actions that you want Amazon S3 to take for objects in a bucket during an object's lifetime, such as transitioning objects to another storage class.
Based on Nasuni’s analysis of aggregated customer usage patterns, most data written to a Nasuni volume is read by other appliances sharing a volume within a day or so, and, after that, either remains in an appliance’s cache or is read infrequently. Therefore, transitioning to a less expensive tier of storage typically reduces costs.
Nasuni-supported Amazon S3 storage classes include Standard-IA, Intelligent-Tiering, One Zone-IA, and Glacier Instant Retrieval.
Considerations for choosing a transitions policy include the following:
After objects are transitioned from one tier to another, they cannot be transitioned back. For example, suppose you set up a lifecycle policy that moves objects from Infrequently Accessed (IA) to Glacier Instant Retrieval (IR) after 120 days. After the data is moved to Glacier IR, even if that data is accessed, it always remains in Glacier IR. It is not transitioned back to the Infrequently Accessed tier.
Nasuni's aggregated data analysis shows that Glacier Instant Retrieval yields greater savings than Standard-IA.
If you are using Snapshot Retention, be aware that Glacier Instant Retrieval bills for a minimum of 90 days, while Standard IA bills for a minimum of only 30 days.
If you are using a Snapshot Retention policy that keeps fewer than 60 days of snapshots, you might not want to create a lifecycle policy at all, since waiting 30 days to transition to Standard IA and then keeping the data for fewer than 30 days might not be cost effective.
Similarly, anyone with a Snapshot Retention policy of less than 120 days might not want to transition to Glacier Instant Retrieval, since deleting the data before this still incurs a 90-day Glacier Instant Retrieval charge.
In addition, transitioning to Glacier Instant Retrieval incurs a one-time charge.
It is difficult to predict the effect on bills. For example, if you apply a lifecycle policy to a newly created bucket, you might not even notice the transition charge, since it accrues on a rolling basis as data is written to the bucket. On an existing volume, however, the one-time charge for transitioning existing data is likely to be noticeable. That said, Nasuni's analysis of aggregated customer billing data shows that, when transitioning data to less expensive tiers of storage, you should break even within 2-3 months, and that data held longer yields significant savings.
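The break-even arithmetic above can be sketched with a small calculator. The dollar figures below are placeholders, not current AWS pricing; substitute real per-GB prices and your actual per-GB transition cost from the AWS pricing pages.

```python
# Placeholder prices -- NOT current AWS pricing.
SRC_PRICE = 0.023        # $/GB-month in the source tier (assumed)
DST_PRICE = 0.004        # $/GB-month in the destination tier (assumed)
TRANSITION_COST = 0.02   # one-time transition cost per GB (assumed)

def months_to_break_even(transition_cost_per_gb: float,
                         src_price: float, dst_price: float) -> float:
    """Months until the monthly savings repay the one-time transition charge."""
    monthly_saving = src_price - dst_price
    if monthly_saving <= 0:
        raise ValueError("destination tier must be cheaper than the source tier")
    return transition_cost_per_gb / monthly_saving

months = months_to_break_even(TRANSITION_COST, SRC_PRICE, DST_PRICE)
print(f"break even after about {months:.1f} months")
```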
Considerations when transitioning objects using Amazon S3 Lifecycle include the following:
Use S3 Lifecycle rule actions to “Move current versions of objects between storage classes”. The rest of the options do not apply to Nasuni.
Objects can only be transitioned between online Amazon S3 Storage Classes.
For more information on Amazon S3 lifecycle policy, see Transitioning objects using Amazon S3 Lifecycle.
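A lifecycle rule that moves current object versions to Glacier Instant Retrieval after 30 days can be sketched as a configuration document. The structure below follows the shape accepted by boto3's `put_bucket_lifecycle_configuration`; the rule ID, bucket name, and 30-day threshold are illustrative assumptions.

```python
# Lifecycle configuration: transition current versions to Glacier Instant
# Retrieval after 30 days. An empty Filter applies the rule to all objects.
lifecycle_config = {
    "Rules": [
        {
            "ID": "nasuni-transition-to-glacier-ir",  # illustrative name
            "Filter": {},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER_IR"},
            ],
        }
    ]
}

# Applying it would look like this (requires AWS credentials; not run here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="nasunifiler-example", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["Transitions"][0]["StorageClass"])
```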
Protecting against accidental deletion using AWS Bucket Versioning
You can use bucket versioning and lifecycle policies to add extra protection against accidental deletion. Nasuni recommends the following settings for any S3 bucket used for a Nasuni volume:
From the Bucket name list, click the name of the bucket that you want to enable versioning for.
Click the Properties tab.
In the Bucket Versioning area, click Edit. The Edit Bucket Versioning page appears.
Select Enable, and then click Save changes.
We additionally recommend creating a lifecycle rule to expire previous object versions after a specified time. This ensures that you continue to realize cost savings from deleted files and from any volumes with a snapshot retention policy.
Click the Management tab, and then, in the Lifecycle rules area, click Create lifecycle rule. The Create lifecycle rule page appears.
In the Lifecycle rule name text box, type a name for your rule, to help identify your rule later.
Select “Apply to all objects in the bucket”.
Then select “I acknowledge that this rule will apply to all objects in the bucket.”.
The option “Limit the scope of this rule using one or more filters” is not supported by Nasuni.
In the Lifecycle rule actions area, enable “Permanently delete noncurrent versions of objects”.
The “Permanently delete noncurrent versions of objects” area appears.
Enter the “Days after objects become noncurrent”. Nasuni recommends 30 days. Selecting a longer duration trades reduced cost savings for extra protection during that period.
In the Lifecycle rule actions area, enable “Delete expired object delete markers or incomplete multipart uploads”.
The “Delete expired object delete markers or incomplete multipart uploads” area appears.
Make sure that both “Delete expired object delete markers” and “Delete incomplete multipart uploads” are unchecked. Nasuni does not support these options.
Verify the settings for your rule. Click Create rule.
If the rule does not contain any errors, it is created, listed on the Lifecycle page, and enabled.
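The noncurrent-version expiration rule recommended above can be sketched as a configuration document. The shape matches boto3's `put_bucket_lifecycle_configuration`; the rule ID is an illustrative assumption, and 30 days is the recommendation from the steps above.

```python
# Lifecycle rule: permanently delete noncurrent object versions 30 days
# after they become noncurrent. Empty Filter = all objects in the bucket.
versioning_cleanup_rule = {
    "ID": "expire-noncurrent-versions",  # illustrative name
    "Filter": {},
    "Status": "Enabled",
    "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
}

print(versioning_cleanup_rule["NoncurrentVersionExpiration"])
```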
Prefix-Based Tiering of Objects (Optional)
Customers who understand their workflow patterns and want to reduce storage costs can use object prefix-based rules to move their data objects to a less expensive tier while keeping all metadata objects in the standard tier, which helps if the use case involves filesystem search, indexing, scanning, or classification. Note that deep-scan workflows incur a cost if data is retrieved sooner than the minimum duration of the storage class.
Nasuni Data objects have the prefix designation of ‘1.’ (the number One followed by a period) at the beginning of the object name. Prefix-based rules can be put in place to move this object type to a less expensive online tier.
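The "1." data-object naming convention can be checked with a small helper, for example when auditing a bucket listing to estimate how much data a prefix-based rule would affect. The helper name and the sample keys below are illustrative assumptions.

```python
# Nasuni data objects begin with "1." (the digit one followed by a period);
# other objects are metadata and should stay in the standard tier.
def is_data_object(key: str) -> bool:
    """True for Nasuni data objects (candidates for prefix-based tiering)."""
    return key.startswith("1.")

sample_keys = ["1.0004a7", "manifest.meta", "1.9fe2c"]  # illustrative keys
data_keys = [k for k in sample_keys if is_data_object(k)]
print(data_keys)  # only the "1." objects
```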
Nasuni Cloud Credential Configuration
Nasuni provides a Nasuni Connector for Amazon S3 and a separate connector for Amazon S3 GovCloud.
Tip: If you have a requirement to change Cloud Credentials on a regular basis, and if you are using NMC version 22.2 or later, NMC APIs can automate the process of updating existing credentials.
Tip: If you have a requirement to change Cloud Credentials regularly, use the following procedure, preferably outside office hours:
Obtain new credentials. Credentials typically consist of a pair of values, such as Access Key ID and Secret Access Key.
On the Cloud Credentials page, edit the cloud credentials to use the new credentials.
For existing in-use cloud credentials, updating only the access key and secret on a 9.8+ Edge Appliance takes effect immediately. Updating the hostname in the cloud credentials takes effect on the next snapshot that contains unprotected data.
Manually performing a snapshot also causes the change in cloud credentials to be registered, even if there is no unprotected data for the volume.
After each Edge Appliance has performed such a snapshot, the original credentials can be retired with the cloud provider.
Warning: Do not retire the original credentials with the cloud provider until you are certain that they are no longer necessary. Otherwise, data might become unavailable.
Tip: With on-premises object storage, if you are using an HTTPS proxy, consider including the hostnames of the target endpoint for the on-premises object storage in the Do Not Proxy specification. Otherwise, the proxy server might not allow data traffic or might slow down data traffic.
Tip: With on-premises object storage, create two DNS entries (“A” records) for the target endpoint of the on-premises object storage that point to the desired IP address, in this form:
hostname.your.domain
*.hostname.your.domain
This is necessary so that the Nasuni Edge Appliance can connect to the volumes using, for example, bucketname.hostname.your.domain.
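The wildcard "A" record supports virtual-hosted-style addressing, in which each bucket is reached as a subdomain of the endpoint hostname. A minimal sketch, with illustrative bucket and hostname values:

```python
# Virtual-hosted-style addressing: the bucket name becomes a subdomain of
# the endpoint hostname, which is why the wildcard DNS record is required.
def bucket_endpoint(bucket: str, hostname: str) -> str:
    return f"{bucket}.{hostname}"

print(bucket_endpoint("nasunifiler-vol1", "hostname.your.domain"))
# nasunifiler-vol1.hostname.your.domain
```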
To configure Nasuni for Amazon S3, follow these steps:
Ensure that port 443 (HTTPS) is open between the Nasuni Edge Appliance and the object storage solution.
Click Configuration. On NMC, click Account.
Select Cloud Credentials.
Click Add Amazon S3 Credentials.
Alternatively, click Add New Credentials, then select Amazon S3 from the drop-down menu.
Enter the following information from the Amazon S3 cluster:
Name: A name for this set of credentials, which is used for display purposes, such as AWSCredentials1.
Access Key: The Amazon S3 Access Key for this set of credentials, which can be obtained as described in step 16 on page 28.
Secret Access Key: The Amazon S3 Secret Access Key for this set of credentials, which can be obtained as described in step 16 on page 28.
Hostname: Use the default setting: s3.amazonaws.com.
On NMC 23.1 and Edge Appliance 9.10 and earlier versions, use region-specific hostnames for the following regions:
Location | Code | Hostname
Africa (Cape Town) | af-south-1 | s3.af-south-1.amazonaws.com
Asia Pacific (Hong Kong) | ap-east-1 | s3.ap-east-1.amazonaws.com
Europe (Milan) | eu-south-1 | s3.eu-south-1.amazonaws.com
Middle East (Bahrain) | me-south-1 | s3.me-south-1.amazonaws.com
Tip: To use one of these regions, ensure that the region is enabled in the customer Amazon S3 account. For details, see Setting permissions to enable accounts for upcoming AWS Regions.
Using region-specific hostnames in cloud credentials is not required for creating new volumes on NMC 23.2 and Edge Appliance 9.12 and later versions. Based on the region selected, NMC 23.2 and Edge Appliance 9.12 and later versions use region-specific hostnames to create the volume.
Note: Since March 20, 2019, customers must opt into new AWS regions before using them to create a new volume. The procedure to opt into a region can be found here along with the list of regions.
Verify SSL Certificates: Use the default enabled setting.
Skip Validation (optional): New cloud credentials are validated against us-east-1 by default. To avoid validation of new and unused credentials, skip this validation. Cloud credentials are validated during volume creation against the selected region.
Note: Volume creation fails with invalid cloud credentials.
Note: To ensure connectivity, cloud credentials are validated on update.
Note: Region-specific hostnames are validated against their region.
Filers (on NMC only): Select the target Nasuni Edge Appliances.
Click Save Credentials.
At this point, you can begin adding volumes to the Nasuni Edge Appliance.
Adding volumes
To add volumes with Amazon S3, follow these steps:
Click Volumes, then click Add New Volume. The Add New Volume page appears.
Enter the following information for the new volume:
Name: Enter a human-readable name for the volume.
Cloud Provider: Select Amazon S3.
Credentials: Select the Cloud Credentials that you defined in step 5 on page 15 for this volume, such as AWSCredentials1.
Region: From the drop-down list, select the region. For details on regions, see https://docs.aws.amazon.com/general/latest/gr/s3.html.
For the remaining options, select what is appropriate for this volume.Click Save.
This creates a new volume with Amazon S3.
Other supported endpoints
In addition to the standard public endpoint, Nasuni supports several other Amazon S3 endpoints.
AWS PrivateLink for Amazon S3
S3 PrivateLink lets you access S3 buckets from on-premises locations or from within a VPC, without using public IPs or the internet. This can improve security by keeping your data traffic within a private network.
AWS PrivateLinks are region-specific and VPC-specific endpoints. Edge Appliances using the PrivateLink should be deployed within the same VPC or should have access to the VPC’s private network using VPC Peering, AWS Direct Connect, or VPN tunnel.
Note: New cloud credentials with PrivateLink as the hostname are validated against the endpoint's region.
Version compatibility considerations
NEA versions 9.15 and later, and NMC versions 24.1 and later, support AWS PrivateLink.
To use PrivateLink for a shared volume, all connected Edge Appliances must be on NEA version 9.15 or later.
Only Edge Appliances running version 9.15 or later can connect to a volume using PrivateLink.
Enabling PrivateLink
To enable PrivateLink, follow these steps:
From the AWS console, create a PrivateLink endpoint for S3. Steps to create a PrivateLink endpoint are outlined at Create a VPC endpoint.
On the NMC Cloud Credentials page, create a new Cloud Credential or update an existing one by replacing the Hostname with the newly created PrivateLink.
For AWS-provided PrivateLink DNS names (endpoints), in the Hostname, replace the leading “*” with “bucket”.
Example: bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com
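The hostname transformation in the step above can be sketched as a small helper; the function name is illustrative, and the example DNS name is the one shown above.

```python
# AWS-provided PrivateLink DNS names begin with "*"; the Nasuni Hostname
# field needs that leading "*" replaced with "bucket".
def privatelink_hostname(dns_name: str) -> str:
    """Convert an AWS PrivateLink DNS name into the Hostname value."""
    if dns_name.startswith("*."):
        return "bucket." + dns_name[2:]
    return dns_name

print(privatelink_hostname("*.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
# bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com
```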
Important: When creating a new volume using the PrivateLink Cloud Credential, the new volume must be created in the same region as the PrivateLink. Volumes cannot be accessed using a PrivateLink in a different region.
Important: If you change an existing Cloud Credential, and that Cloud Credential uses PrivateLink, then all volumes using that Cloud Credential should be in the same AWS region as the PrivateLink.
Important: VPC policies can exclusively route all traffic to the bucket over PrivateLink. Before implementing such VPC policies, ensure that all Edge Appliances are using the PrivateLink.
Known Restrictions
Because PrivateLinks are region-specific, PrivateLinks cannot be used if an Edge Appliance’s Cloud Credential is used to access Nasuni volumes in different regions.
FIPS Endpoints
A FIPS endpoint encrypts all data in transit using cryptographic standards that comply with Federal Information Processing Standard (FIPS) 140-2. Use FIPS endpoints to comply with FIPS requirements or to enhance your data security posture.
Note: New Cloud Credentials with the FIPS endpoint as the hostname are validated against the endpoint's region.
Limited regional availability
FIPS endpoints are available in limited regions in the USA and Canada. The list of supported regions and their corresponding FIPS endpoints is here: Federal Information Processing Standard (FIPS).
Version Compatibility
NEA version 9.15 and later, and NMC version 24.1 and later, support FIPS endpoints.
To use FIPS endpoints for a shared volume, all connected Edge Appliances must be on NEA version 9.15 or later.
Only Edge Appliances running version 9.15 or later can connect to a volume using a FIPS endpoint.
Enabling FIPS Endpoints
To enable FIPS Endpoints, follow these steps:
When creating a new Cloud Credential, or changing an existing Cloud Credential, use the FIPS endpoint as the Hostname.
Important: When creating a new volume using the FIPS endpoint Cloud Credential, the new volume should be created in the same region as the FIPS endpoint. Volumes cannot be accessed using a FIPS endpoint in a different region.
Important: When updating an existing cloud credential, the FIPS endpoint and all connected volumes using the credential should be in the same region. After updating existing credentials, take a snapshot from the Edge Appliance using the updated Cloud Credentials.
Known Restrictions
Because FIPS endpoints are region-specific, they cannot be used if an Edge Appliance’s Cloud Credential is used to access Nasuni volumes in different regions.
S3 Transfer Acceleration (S3TA)
S3 Transfer Acceleration enables fast transfer of data from a Nasuni Edge Appliance to an S3 bucket by taking advantage of Amazon CloudFront's edge locations. Data enters AWS via the nearest CloudFront edge location and is then routed to the S3 bucket via Amazon's ultra-fast internal network. A more detailed description is available here: Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration.
According to AWS, customers can expect a 50-500 percent speed improvement. Amazon provides a tool to show the benefits of S3TA from a specific customer location.
Note: The boost to the transfer rate depends on the Edge Appliance's available bandwidth and QoS settings.
Cost
Pricing is based on the AWS edge location used to accelerate your transfer. S3 Transfer Acceleration pricing is in addition to Data Transfer pricing.
Limited Regional Availability
S3 Transfer Acceleration is not supported in all AWS regions. A list of supported regions is here: Requirements for using Transfer Acceleration. The S3 buckets should be in one of these supported regions.
Version Compatibility
NEA versions 9.15 and later, and NMC versions 24.1 and later, support S3 Transfer Acceleration.
Enabling S3 Transfer Acceleration
To enable S3 Transfer Acceleration, follow these steps:
Identify the volume and connected Edge Appliances that need S3 Transfer Acceleration.
From the customer’s AWS Console, enable S3 Transfer Acceleration on the volume's corresponding AWS bucket.
Detailed instructions are here: Enabling and using S3 Transfer Acceleration.
The NMC Volume Details page shows the corresponding bucket name.
From the NMC, go to the Account Status Cloud Credentials page. For the intended Edge Appliances, update the hostname to s3-accelerate.amazonaws.com.
Verify that all Edge Appliances can still read from and write to the volume.
S3TA Best Practices
Maintain an Edge Appliance within the region of the data:
Use this Edge Appliance to create volumes.
Share the volumes created with all other Edge Appliances, as necessary.
Maintain the standard S3 URL on the Edge Appliance that owns the volume.
Change the URL to s3-accelerate.amazonaws.com for the credentials of all other intended Edge Appliances.
Switching back to the public S3 Endpoint
Customers can switch back to accessing data from their respective S3 bucket using the public endpoint (s3.amazonaws.com) by following these steps:
Review your S3 bucket policies to ensure that the S3 bucket has public connectivity.
Update the hostname on the intended Cloud Credentials.
Verify that all Edge Appliances can still read from and write to the volume using the updated Cloud Credentials.
Appendix: Creating Amazon S3 (Simple Storage Service) User Credentials
Important: You must have created an Amazon S3 (Simple Storage Service) account. See http://aws.amazon.com/s3/.
Important: Confirm that your Nasuni account is configured for supplying your own Amazon S3 credentials. On the Cloud Credentials page, “Amazon S3” or “Amazon S3 GovCloud” should be among the list of available Cloud Credential providers. If neither of those is present, contact Sales or Nasuni Support to enable one or more in your license.
To create Amazon Simple Storage Service credentials, follow these steps:
Log in to the AWS Management Console at https://console.aws.amazon.com. The AWS Console Home page appears.
In the upper left corner, click Services.
Drop-down lists of services appear. Click All services. A drop-down list of all services appears.
From the list of services, click IAM (Identity and Access Management). The “IAM Dashboard” screen appears.
On the left side, click Users. A list of defined users appears.
In the upper right corner, click “Add users”. The “Create user/Specify user details” page appears.
Enter the new User name.
Do not select “Provide user access to the AWS Management Console.”
Note: If you are creating programmatic access through access keys or service-specific credentials for AWS CodeCommit or Amazon Keyspaces, you can generate them after you create this IAM user.
Click Next.
The “Set permissions” page appears.
It is recommended to add users to an existing group or to create a group to add users to. (Alternatively, you can click “Attach existing policies directly”, then, from the list, select AmazonS3FullAccess.)
Follow these steps:
Select “Add user to group”.
On the right side, click “Create group”. The “Create user group” page appears.
Enter a “User group name”.
In the “Permissions policies” area, search for the permissions policy “AmazonS3FullAccess”, then select that permissions policy.
Click “Create user group”.
The defined user group is created and appears in the list of user groups.
Back on the “Set permissions” page, select the group that you just created for this user.
Click “Next”. The “Review and create” page appears.
Review all entries.
To add any tags, click “Add new tag” and create tags.
To change any entry, click Previous.
If ready to continue, click Create user.
The specified user name is created. The “User created successfully” message appears.
Click “View user”. The user detail page appears.
In the middle of the page, click the “Security credentials” tab. The “Security credentials” tab appears.
Create an access key for this user by following these steps.
Click “Create access key”. The “Access key best practices & alternatives” page appears.
Select “Application running outside AWS” and click Next.
The “Set descriptive tag” page appears.
Add a description of the purpose of this access key, then click “Create access key”.
The access key is created. The “Retrieve access keys” page appears.
Important: You MUST record or download the access key and the secret access key now. This is the only opportunity you have to record or download the access key and the secret access key.
Save the access key and the secret access key in at least one of the following ways:
Click “Show” and carefully record both the access key and the secret access key.
Click “Download .csv file” and save both the access key and the secret access key on your computer in a .csv file.
Click Done.
This completes the Amazon Simple Storage Service user credential procedure.
Appendix: Configuring full access (for Nasuni-S3-Users) (Optional)
For full access (Nasuni-S3-users), follow these steps:
Log in to the AWS Management Console at https://console.aws.amazon.com. The AWS Console Home page appears.
Click IAM. The Identity and Access Management (IAM) page appears.
Click User groups. The User groups pane appears.
Create a group for Nasuni S3 access only. This group should only be used for Nasuni S3 users.
Click Create group. The Create user group pane appears.
Enter a User group name and press Enter.
The new user group name appears in the list.
Click the new user group name. The “Users in this group” pane appears.
To add IAM users to this group, click Add users. The Add users pane appears.
A list of known users appears. Select any listed users that you want to add to this group, then click Add users. Users are added to the group.
If it is not selected, select the Permissions tab.
From the Add permissions drop-down list, select Attach policies.
Search for the AmazonS3FullAccess permissions policy, then select it and click Add permissions. The Set permissions pane appears.
Assign the user to the Nasuni S3 access only group.
Click Next: Tags. The Add tags pane appears.
Click Next: Review. The Review pane appears.
Click Create user. The new user is created and appears in the list of users.
Click the new user in the list, then click the Security credentials tab.
Verify that the Console password is Disabled.
Appendix: Creating a lifecycle policy (Optional)
To create a lifecycle policy, follow these steps:
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
From the Bucket name list, click the name of the bucket that you want to create a lifecycle policy for.
Click the Management tab, and then click Add lifecycle rule.
The Lifecycle rule dialog box appears.
In the “Enter a rule name” text box, type a name for your rule to help identify the rule later. The name must be unique within the bucket.
Leave “Add filter to limit scope to prefix/tags” blank. This is not supported by Nasuni.
To apply this lifecycle rule to all objects in the bucket, click Next.
The Transitions screen appears.
You configure lifecycle rules by defining rules for transitioning objects to the Standard-IA, Intelligent-Tiering, One Zone-IA, and Glacier Instant Retrieval storage classes. For more information, see Storage Classes in the Amazon Simple Storage Service Developer Guide.
To define transitions that are applied to the current version of the object, select Current version.
Leave Previous versions blank. This is not supported by Nasuni.
Click Add transitions and specify one of the following transitions:
- Click Transition to Glacier Instant Retrieval after, and then type the number of days after the creation of an object that you want the transition to be applied (for example, 30 days).
- Click Transition to Standard-IA after, and then type the number of days after the creation of an object that you want the transition to be applied (for example, 30 days).
- Click Transition to Intelligent-Tiering after, and then type the number of days after the creation of an object that you want the transition to be applied (for example, 30 days).
- Click Transition to One Zone-IA after, and then type the number of days after the creation of an object that you want the transition to be applied (for example, 30 days).
- Do not click Transition to Glacier Deep Archive after. This is not supported by Nasuni.
For more information, see Transitioning Objects Using Amazon S3 Lifecycle.
When you are done configuring transitions, click Next.
The Expiration screen appears.
Expiration is not supported by Nasuni. Do not select any options. Click Next.
The Review screen appears.
Verify the settings for your rule. To make changes, click Previous. Otherwise, click Save.
If the rule does not contain any errors, it is listed on the Lifecycle page and is enabled.