I need to store data using a Perl CGI script on an Apache 2.x Ubuntu server. I want to lock down access to that data so it is readable/writable only by Apache (or ideally only by the Perl script), and not accessible via a URL.

One option would be to set up a database, but I'd like to avoid that dependency and keep the data in uniquely named files (this approach works best for this application).

However, I'm not sure of a good way to configure this.

What configuration is needed to set up a 'sandbox' directory for file I/O under any web server, without allowing a hyperlink to reach that directory?

Put the directory outside the document root (and outside any directories aliased into the document root or mapped there via mod_rewrite). Apache, and therefore the Perl script, will be able to read and write that location (given the full path to it and correct file permissions, of course), but it won't be accessible via a URL.
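A minimal sketch of that setup, with assumed names: on a stock Ubuntu install Apache runs as `www-data`, and a path like `/var/lib/myapp/data` would sit safely outside the `/var/www` document root. The `/tmp/myapp-data` path below is used only so the commands run without root; both paths are illustrative, not part of the question.

```shell
# Hypothetical sandbox directory, outside the document root.
DATA_DIR=${DATA_DIR:-/tmp/myapp-data}

mkdir -p "$DATA_DIR"

# In production, hand the directory to the Apache user (needs root):
#   sudo chown www-data:www-data "$DATA_DIR"

# Owner-only rwx: the owning account (Apache, and so the CGI script
# it runs) can create and read files here, while other local users
# cannot even list the directory.
chmod 700 "$DATA_DIR"
```

Since the directory is never mapped by an `Alias`, `DocumentRoot`, or rewrite rule, no URL resolves to it; only code running as the owner can touch the files.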

There's an important caveat! Files outside the document root aren't directly accessible via a URL, but if the Perl script serves the files' content back to users, you still have the usual security concerns of XSS and information disclosure. You'll also want to consider how filename "uniqueness" (or the lack of it) affects this design, e.g. a user might deliberately try to overwrite someone else's files or guess file names.
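One way to address the guessable-filename caveat is to let the system generate unpredictable, collision-free names inside the sandbox rather than deriving names from user input. A hedged sketch using `mktemp` (from within the Perl CGI script itself, `File::Temp`'s `tempfile()` provides the same guarantee); the paths and the `record.` prefix are assumptions for illustration:

```shell
# Same hypothetical sandbox as above; create it if it doesn't exist.
DATA_DIR=${DATA_DIR:-/tmp/myapp-data}
mkdir -p "$DATA_DIR"
chmod 700 "$DATA_DIR"

# mktemp atomically creates a file with a random suffix, so one user
# cannot guess another user's filename or deliberately overwrite it.
f=$(mktemp "$DATA_DIR/record.XXXXXXXX")
echo "some data" > "$f"
```

The key design point is that the name is chosen server-side and never accepted from the request, which sidesteps both name-guessing and path-traversal input.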