S3 + Rails

Update: This has mostly been superseded by my full-on s33r project, hosted on RubyForge. It contains a Rails application as an example, but far more functionality than this early effort. I'd go so far as to say it outstrips the Amazon Ruby sample code for S3, as it provides object wrappers around the bucket listing, logging and ACL functionality, making them much easier to use than the Amazon samples. Give it a try, why don't you?

Well, I spent most of today obsessively coding a simple Rails front-end to S3 called s33r (pronounced "seer"). It's very incomplete, but mostly intended as a proof of concept. It allows you to perform the following operations:
  • Create and delete "buckets"
  • Browse a list of keys in each bucket
  • Add resources to buckets, either by uploading files (s33r will guess the content type) or entering text
  • Delete resources from buckets
  • Make buckets and resources either private or public

There's no support for resource prefixes, so the storage is fairly flat at the moment.
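The content-type guessing mentioned above is done with the MIME::Types library. As a rough illustration of the idea, here's a tiny stand-in lookup in plain Ruby (the real library covers vastly more types than this toy table, and its API differs):

```ruby
# A minimal stand-in for the extension-based content-type guess s33r
# performs on uploaded files. Illustrative only: the real code uses
# the MIME::Types library rather than a hand-rolled table like this.
CONTENT_TYPES = {
  '.html' => 'text/html',
  '.jpg'  => 'image/jpeg',
  '.png'  => 'image/png',
  '.txt'  => 'text/plain'
}.freeze

def guess_content_type(filename)
  ext = File.extname(filename).downcase
  # Fall back to the generic binary type when the extension is unknown
  CONTENT_TYPES.fetch(ext, 'application/octet-stream')
end
```

Unknown extensions fall through to `application/octet-stream`, which is the conventional "opaque bytes" default.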

It incorporates the S3 sample Ruby code from Amazon, the HMAC-SHA1 library, and the MIME::Types library. I haven't included the licences, but I believe they are all MIT. My S3 code is a Rails plugin in vendor/plugins/S3Client. The Rails code is hastily hacked together and badly organised, but I got carried away. I haven't frozen the Rails gems in, but for reference I used Rails 1.0.
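The HMAC-SHA1 library is what S3 request authentication hangs off: you build a canonical string from the request, HMAC it with your secret key, and Base64-encode the digest. A simplified sketch of that signing scheme (it omits the x-amz-* header canonicalisation of the full S3 spec) looks like this:

```ruby
require 'openssl'
require 'base64'

# Sketch of S3's HMAC-SHA1 request signing, the job the bundled
# HMAC-SHA1 code is used for. Simplified: the real scheme also folds
# canonicalised x-amz-* headers into the string to sign.
def sign_s3_request(secret_key, verb, resource, date, content_type = '', content_md5 = '')
  # Order matters: verb, MD5, type, date, then the resource path
  string_to_sign = [verb, content_md5, content_type, date, resource].join("\n")
  digest = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, string_to_sign)
  Base64.strict_encode64(digest)
end
```

The resulting signature goes into the request's `Authorization` header alongside the access key ID.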

If you want to try installing it, the package is attached to the end of this blog entry. Untar and install as you would any other Rails app. You will need to fix the shebang lines (they point at one of my many custom Rails environments!). Then you will need an S3 account. Once you have this, edit config/s3_config.yaml with your AWS access key details, and change bucket_prefix to something sensible for you. Then start it up (WEBrick or Lighttpd), browse to localhost (port 3333 if you use the included Lighttpd config), and away you go!
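For a sense of what that config step involves, here's a sketch of loading such a YAML file. Note the key names below are guesses based on the description, not the actual contents of s3_config.yaml:

```ruby
require 'yaml'

# Hypothetical shape of config/s3_config.yaml -- the key names here
# are illustrative guesses, not the real file's keys.
config = YAML.load(<<~YAML)
  aws_access_key: YOUR_ACCESS_KEY_ID
  aws_secret_access_key: YOUR_SECRET_ACCESS_KEY
  bucket_prefix: myprefix
YAML
```

Since S3 bucket names are globally unique across all users, a personal bucket_prefix is what keeps your buckets from colliding with everyone else's.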

There is a SQLite database attached to the app, but it isn't used at present. I'm not sure whether this will break if you try to use the package without SQLite installed. I tested under Lighttpd, and there's a Lighttpd config file in the config directory if you want to use that.

The whole thing will be released under the MIT Licence eventually, and I will put it up on RubyForge once I get a minute. But I was too excited to wait before releasing it :) Any feedback would be great.


This software comes with no warranty. I am happy to answer informal queries, though.


Any updates?

Elliot, this looks pretty useful. Are there any newer versions of this that you know of?


I'm a little late, but I appreciate this release. (Even though it's old, I use it on some old systems.)


Hi Elliot,

Thanks for the latest release. However, I have been having problems getting fores33r to work at all, due to an "Unable to connect" error. I tried freezing it with edge Rails, but it still didn't work. FYI, I didn't have any problems running your previous release, the s33r app.

Any suggestions?



Hello again Marv. Have you copied the file examples/s3.yaml into examples/fores33r/config/? You will then need to edit the AWS connection keys in that file. Let me know whether this fixes your issue.


Hi Elliot,

I have uninstalled all the old versions of s33r, and downloaded the 0.5.1 version.

I did an SVN checkout and saw the changes you have made to utility.rb:
- require File.join(base, 's33r_exception')
- include S3Exception

Unfortunately, I am still getting an "unable to connect" error with fores33r.

And simple.rb still gives me the error message:

./../../lib/s33r/bucket.rb:7: uninitialized constant S33r::Client (NameError)
from ./../../lib/s33r.rb:2
from ./../../lib/s33r.rb:2

I am starting to wonder if something is wrong on my end. I am using Rails 1.1.6, and I am running your older s33r Rails app without any problem...




Hello again Marv. I've done some fairly major updates to how libraries are loaded and how namespaces get included. I have to be honest and say this is an area of some confusion for me, but I've done my best to tidy up. Could you try again from the Subversion repository and see if this fixes things for you? I will aim to write some tests for the HTTP access parts of s33r soon, to help me track down these problems.
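The "uninitialized constant S33r::Client" error above is the classic symptom of a load-order bug: one file references a constant before the file defining it has been required. A toy reconstruction (not the real s33r code) of the failure and the fix:

```ruby
# Toy reconstruction of the load-order bug: bucket.rb referenced
# S33r::Client before the file defining it had been required.
module S33r; end

load_error = begin
  S33r::Client            # dependency not loaded yet
rescue NameError => e
  e                       # "uninitialized constant S33r::Client"
end

# The fix: require (here, simply define) the dependency before any
# code that references it.
module S33r
  class Client; end

  class Bucket
    def initialize
      @client = Client.new  # now resolves via the enclosing module
    end
  end
end

bucket = S33r::Bucket.new
```

This is why the fix was an explicit require at the top of the dependent file rather than relying on whatever order the files happened to load in.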



Well done!!! Whatever changes you have made, they work! Your latest version of fores33r now loads wonderfully on my browser window!!!

I can now:
- create buckets
- delete buckets
- upload resources
- download resources
- delete resources

Well done!!!

Thanks for all your help,



You're welcome Marv. Your problems prompted me to go through and methodically fix all the library imports and namespaces. It should now work much better, and the code is far cleaner, so thanks for your input, and for sticking with me while I fixed it. As this was a big rewrite of the old code, I am still in the process of fixing parts of it, but hopefully I'll get there before too long. Let me have your website details and I'll put them in the credits file for s33r.


Thanks for intending to credit me for the very minimal testing that I did. The credit is really all yours!!! :)
I am just a newbie! Also, I don't have a website; I am only using my local computer to run and get a feel for different scripts.

Take care,



Hello Elliot,

Yes, I did add my access keys to the s3.yaml file and copy it into the config folder. Still no response unfortunately...



More info...

I have also tried using "simple.rb", and the error message I am getting is shown below. Maybe this can shed some light on the problem. Also, the example scripts prompted me to install Libxml-Ruby, which I did.

The error message for simple.rb:
./../../lib/s33r/utility.rb:77: uninitialized constant S33r::S3Exception (NameError)
from ./../../lib/s33r/bucket.rb:2
from ./../../lib/s33r.rb:2
from ./../../lib/s33r.rb:2
from simple.rb:6

Thanks for your help,



Hello Marv. Thanks for the extra info. A couple of things to try:

  • Make sure you only have one version of s33r installed. I don't think this is the cause, but it could help to isolate the issue.
  • I've added an explicit require for the file containing the S3Exception module to utility.rb. This might help - I'll upgrade the gem once RubyForge is up.

I've just looked at this again, and I think it's a problem with my code. s33r is trying to raise an error (so there is a problem somewhere), but the raise itself is triggering a second error due to faulty namespaces, which masks the real one. I've gone through the code and think I've fixed this now.
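That masking effect can be shown in miniature. In this toy illustration (not the real s33r code), the library tries to raise one of its own exception classes, but because the file defining that class was never loaded, the raise itself blows up with a NameError, hiding the real S3 error:

```ruby
# Toy illustration of exception masking: the raise references an
# exception class whose defining file was never required, so the
# caller sees NameError instead of the intended error.
module S33r
  def self.delete_bucket(exists)
    raise S3Exception::BucketNotFound, 'no such bucket' unless exists
    :deleted
  end
end

masked = begin
  S33r.delete_bucket(false)
rescue StandardError => e
  e.class   # NameError -- the real error is hidden
end

# Once the exception module is actually loaded (here, defined),
# the intended error surfaces instead.
module S33r
  module S3Exception
    class BucketNotFound < StandardError; end
  end
end

real = begin
  S33r.delete_bucket(false)
rescue StandardError => e
  e.class   # now the intended BucketNotFound
end
```

So the NameError Marv saw was a symptom of the missing require, sitting on top of whatever connection problem triggered the raise in the first place.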

AWS credentials problem

Hello Elliot,

I am trying to install your Rails app for AWS S3, but I keep on getting a connection problem:

AWS credentials do not allow connection to S3

#{RAILS_ROOT}/vendor/plugins/S3Client/lib/client.rb:54:in `init'

I signed up on Amazon Web Services and got their confirmation email, listing my two access keys, which I entered into the config file.

By the way, I am running Rails 1.1.

Any suggestion would be appreciated! Thank you!


Please disregard my previous post. I was improperly signed up with Amazon... It is working now, and your Rails app is great! Thanks a bunch! :)


Excellent. It's quite primitive in many ways, but it does the job for me.

Error in your client.

The bucket does NOT need to be public in order for the resource to be public... This allows you to have publicly accessible files without letting people list the entire contents of the directory.

To enable this (proper) behaviour, just comment out line 247 of client.rb


Thanks for the comment Aaron. Which version of the code are you referring to (the line number doesn't seem to correspond to the repository version)? Are you talking about the "make_public" method? I seem to recall that when I made a bucket private it hid the contents from public view, but I was probably mistaken.


I think I just grabbed the .gz linked to in the blog post.

I was indeed talking about the make_public method.

Making a bucket private hides the public listing of contents but not the files themselves. This lets you have security through obscurity.
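At the S3 REST level, this works because object ACLs are independent of the bucket ACL: an object can be uploaded with the `x-amz-acl: public-read` header while the bucket itself stays private. A sketch (the bucket and key names are made up, and the request is only built here, not signed or sent):

```ruby
require 'net/http'

# Sketch of the behaviour described above: a public-read object in a
# private bucket. Hypothetical bucket/key; the request is constructed
# but never actually sent to S3 (no signing, no network call).
uri = URI('https://mybucket.s3.amazonaws.com/photos/cat.jpg')
request = Net::HTTP::Put.new(uri)
request['x-amz-acl'] = 'public-read'   # object ACL, independent of the bucket ACL
request['Content-Type'] = 'image/jpeg'
request.body = 'fake image bytes'      # placeholder payload
```

The object's URL is then publicly fetchable, but a GET on the bucket itself (the key listing) is still denied, which is the security-through-obscurity arrangement Aaron describes.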


Thanks Aaron. This is quite an old version, and I've updated it considerably since. I now host it on http://rubyforge.org/projects/s33r/, where you can get a gem version. I think this version will probably fix the bug you correctly identified. The latest versions support ACLs better and provide an object layer over the XML, so you can pretty much do what you like with them.

Ah, Great!


Maybe you should edit the parent post?


Good point! I will.


Thanks for the plugin and code, mate; it works great and is helping me a great deal with S3 and Rails.


I love it when people enjoy my code. Any suggestions for improvements, or any work you'd like to fold in, let me know. Perhaps I should get it up on SourceForge and open it up a bit.

Glad it's useful

Dear Paul,

Glad you found it useful. I have to confess, it's a bit of an orphan at the moment, as I started rewriting it and haven't yet completed the work. I do plan to do it eventually, but other work has cropped up to rob me of my spare time.

As far as the content_type goes, I put a simple accessor method in for this as an extension to the S3::Response class, but didn't set it when the client gets a resource back from S3. However, S3::Response is just a subclass of Net::HTTPResponse, so it should be possible to dig into that to pull out the Content-Type HTTP header. I'm guessing, but it should be possible with something like:

content_type = response['Content-Type']

It's definitely something I will sort in my new implementation.
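The header indexing suggested above is standard Net::HTTPResponse behaviour, which can be checked without a network call by building a bare response object by hand; since S3::Response subclasses Net::HTTPResponse, the same lookup should apply to it:

```ruby
require 'net/http'

# Build a bare response object (no network call) to demonstrate the
# header lookup suggested above. S3::Response subclasses
# Net::HTTPResponse, so the same indexing works there too.
response = Net::HTTPOK.new('1.1', '200', 'OK')
response['Content-Type'] = 'image/png'

content_type = response['Content-Type']
```

Header access via `[]` is case-insensitive on Net::HTTPResponse, so `response['content-type']` would work equally well.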


Nice work on this. I am using it to support file uploads to my app, doing away with my old file system implementation. So far it works flawlessly. I already had a MySQL database set up that stored pointers to the attachment files, and without much modification I was able to integrate your plugin code. I noticed some issues with the content_type method on the Response, however. It doesn't affect my implementation, but I was wondering if this was just stubbed out for later?