Compare commits

...

61 Commits

Author SHA1 Message Date
Roberto Cirillo 2e4e709ff8 restore source to 1.8 2022-09-19 10:30:29 +02:00
Roberto Cirillo 89524aac9c set source jdk to 11 2022-09-19 10:28:15 +02:00
Roberto Cirillo c252d1c5ae removing jvm source tag 2022-09-19 10:26:36 +02:00
Roberto Cirillo ae6740cfbb set java version 2022-09-19 10:22:43 +02:00
Roberto Cirillo ac95563480 update to version 2.9.2 2022-09-07 17:25:49 +02:00
Roberto Cirillo 904bf0bc17 restored close method 2022-09-07 17:10:08 +02:00
Roberto Cirillo 83bc9fbc2e update to release version 2.9.1 2022-06-28 14:15:48 +02:00
Roberto Cirillo bbaf866f43 update CHANGELOG 2022-06-28 14:12:42 +02:00
Roberto Cirillo 7d2be48d31 update to 2.9.1-SNAPSHOT version in order to have a fixed bom as latest in the range. see #23578 2022-06-28 14:07:50 +02:00
Roberto Cirillo 588a71db5e Revert "Update 'pom.xml'"
This reverts commit 7194811366.
2022-06-28 14:04:30 +02:00
Roberto Cirillo db61504f66 Update 'CHANGELOG.md'
removed SNAPSHOT
2021-10-08 09:31:37 +02:00
Roberto Cirillo e60fcbfdac Update 'pom.xml'
removed SNAPSHOT
2021-10-08 09:31:18 +02:00
Roberto Cirillo e6ca5d25b4 Update 'pom.xml'
moved to snapshot
2021-10-08 09:26:25 +02:00
Roberto Cirillo 50dcb2f2bd Update 'CHANGELOG.md'
moved to snapshot
2021-10-08 09:26:03 +02:00
Roberto Cirillo c387a38fdf removed unused imports;
deleted main test
2021-10-07 15:43:06 +02:00
Roberto Cirillo 15a4909d7c bug fix 22164; clean code. 2021-10-07 15:11:55 +02:00
Roberto Cirillo 5a644f79a0 removed SNAPSHOT from version 2021-10-07 10:22:19 +02:00
Roberto Cirillo 505346fac3 moved from 2.13.1 to 3.0.0-SNAPSHOT version 2021-10-07 09:38:06 +02:00
Roberto Cirillo a4abfbad5b update CHANGELOG 2021-09-10 11:24:25 +02:00
Roberto Cirillo 387efa6db5 update to version 2.13.1 2021-09-10 11:22:49 +02:00
Roberto Cirillo de82a56a72 copy: if source and dest have the same id, return 2021-09-10 09:44:05 +02:00
Roberto Cirillo 0472efaf36 add some comments 2021-09-09 15:58:57 +02:00
Roberto Cirillo 745323eb63 add some comments 2021-09-09 15:18:26 +02:00
Roberto Cirillo 3979da000f fix compare sourceId with destId 2021-09-09 12:53:22 +02:00
Roberto Cirillo 20e3993af2 throw exception if an object is a valid id and it is not present on backend after the retry mechanism 2021-09-09 12:25:13 +02:00
Roberto Cirillo af016382fe fix wrong print 2021-09-09 11:52:31 +02:00
Roberto Cirillo e7cd080da7 add another check for understand if the source and destination are the same during a copy operation 2021-09-09 11:24:30 +02:00
Roberto Cirillo 83bb788057 update to version 2.13.1-SNAPSHOT 2021-09-09 11:07:20 +02:00
Roberto Cirillo ea22672588 add a little sleep in order to understand if it solve the issue 21980 2021-09-09 11:03:44 +02:00
Roberto Cirillo 77048b747e add some files to gitignore 2021-09-03 11:57:25 +02:00
Roberto Cirillo 81b80f1748 add getgCubeMemoryType method to the IClient interface 2021-09-03 10:36:06 +02:00
Roberto Cirillo b12e16eb4b renamed class RequestObject to MyFile in order to preserve backward compatibility 2021-09-02 14:48:35 +02:00
Roberto Cirillo c0cc09fd5e renamed class RequestObject to MyFile in order to preserve backward compatibility 2021-09-02 14:48:17 +02:00
Roberto Cirillo 4f8a65e348 clean mongoobjects into forceClose operation 2021-08-06 14:39:24 +02:00
Roberto Cirillo 46e20d5f6e Merge branch 'master' of
https://code-repo.d4science.org/gCubeSystem/storage-manager-core.git

Conflicts:
	CHANGELOG.md
2021-08-04 09:44:40 +02:00
Roberto Cirillo 7965fa019f add forceClose operation. Update to version 2.13.0-SNAPSHOT 2021-08-04 09:43:10 +02:00
Roberto Cirillo 49f7ba84b2 Update 'CHANGELOG.md'
fix changelog sintax
2021-07-15 15:48:46 +02:00
Roberto Cirillo ee531ec7ef using tm instance on transportManagerFactory in order to consider that a transport should be linked to VOLATILE or PERSISTENT memory 2021-06-18 12:04:41 +02:00
Roberto Cirillo 763b30fa04 fix merge conflicts 2021-05-14 17:06:24 +02:00
Roberto Cirillo 1ff761db9c Resolved merge conflict 2021-05-14 16:47:59 +02:00
Roberto Cirillo 6aabf6729d Resolved merge conflict 2021-05-14 16:44:47 +02:00
Roberto Cirillo 4835adda16 set pom to 2.9.0-SNAP 2021-05-14 16:25:25 +02:00
Roberto Cirillo ab9dfcab66 delegate the transportManager check to TransportManagerFactory class 2021-05-13 18:06:40 +02:00
Roberto Cirillo 77ec6925c7 remove transportLayer check from Operation class 2021-05-13 16:15:55 +02:00
Roberto Cirillo 52a6cf8641 added debug log on TransportManagerFactory 2021-05-13 16:15:18 +02:00
Roberto Cirillo 60b9ebaa93 convert BasicDBObject to DBObject the return type used for metadata collections 2021-05-13 15:24:32 +02:00
Roberto Cirillo 88fe3747f6 update to version 2.12.1-SNAPSHOT #21390 2021-05-13 11:57:05 +02:00
Roberto Cirillo eb74e06cdf 2.12.0-SNAP: static operation class, bypassed mongo close 2021-04-01 10:15:03 +02:00
Roberto Cirillo add9810644 update mongo-java-driver to 3.12.0; upgrade to 2.11.0-SNAPHOt version 2021-03-17 09:57:41 +01:00
Roberto Cirillo 53a52fdc31 removed close method for mongo client. Now the connection pool is
managed by java driver, upgrade mongo-java-driver to 3.12. Deprecated
getUrl method
2021-03-12 17:24:49 +01:00
Roberto Cirillo 2f2ddfad4a upgrade component version to 3.1.0-SNAPSHOT: upgrade mongo-java-driver
to 3.12.0 version
2021-03-11 15:54:05 +01:00
Roberto Cirillo 094484fcf6 deprecated Http methods used for returning http url
update pom to version 3.0.1-SNAPSHOT
2021-02-25 15:30:22 +01:00
Roberto Cirillo a731b29c0d removed commented lines, added 3.0.0-SNAPSHOT entry to the changelog 2021-02-16 10:55:07 +01:00
roberto cirillo e5adc54456 upgrade to version 3.0.0-SNAPSHOT 2021-01-08 17:07:15 +01:00
roberto cirillo e0a11206b7 added token and region parameters.
refactoring code
2020-12-22 17:57:44 +01:00
roberto cirillo a4532fcacd set to static the TransportManager field defined into Operation class.
In this way the backend used is always the same
2020-11-19 15:15:09 +01:00
roberto cirillo 1a7d79127b refactoring Operation class. Created new method getTransport 2020-11-12 18:11:07 +01:00
roberto cirillo a3619dc643 added missing class TransportManager 2020-11-09 16:14:24 +01:00
roberto cirillo c3fde07fc8 update to version 2.10.0-SNAPSHOT
added new input parameter to getSize method, for compatibility with s3
client plugin
2020-11-09 16:13:18 +01:00
rcirillo-pc abad3acc75 removed distroDirectory property 2020-09-23 17:41:37 +02:00
rcirillo-pc a5cda7c602 added changelog, licence and readme files 2020-09-23 17:40:43 +02:00
66 changed files with 824 additions and 412 deletions


@@ -15,14 +15,10 @@
<attributes>
<attribute name="optional" value="true"/>
<attribute name="maven.pomderived" value="true"/>
<attribute name="test" value="true"/>
</attributes>
</classpathentry>
<classpathentry excluding="**" kind="src" output="target/test-classes" path="src/test/resources">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.7">
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.8">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
@@ -30,6 +26,7 @@
<classpathentry kind="con" path="org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER">
<attributes>
<attribute name="maven.pomderived" value="true"/>
<attribute name="org.eclipse.jst.component.nondependency" value=""/>
</attributes>
</classpathentry>
<classpathentry kind="output" path="target/classes"/>

.gitignore (new file, +4)

@@ -0,0 +1,4 @@
/target/
/.classpath
/*.project
/.settings


@@ -5,6 +5,11 @@
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.wst.common.project.facet.core.builder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
@@ -15,9 +20,17 @@
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.eclipse.wst.validation.validationbuilder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jem.workbench.JavaEMFNature</nature>
<nature>org.eclipse.wst.common.modulecore.ModuleCoreNature</nature>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>org.eclipse.m2e.core.maven2Nature</nature>
<nature>org.eclipse.wst.common.project.facet.core.nature</nature>
</natures>
</projectDescription>


@@ -1,12 +1,15 @@
eclipse.preferences.version=1
org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.7
org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
org.eclipse.jdt.core.compiler.codegen.unusedLocal=preserve
org.eclipse.jdt.core.compiler.compliance=1.7
org.eclipse.jdt.core.compiler.compliance=1.8
org.eclipse.jdt.core.compiler.debug.lineNumber=generate
org.eclipse.jdt.core.compiler.debug.localVariable=generate
org.eclipse.jdt.core.compiler.debug.sourceFile=generate
org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
org.eclipse.jdt.core.compiler.problem.enablePreviewFeatures=disabled
org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
org.eclipse.jdt.core.compiler.problem.forbiddenReference=warning
org.eclipse.jdt.core.compiler.source=1.7
org.eclipse.jdt.core.compiler.problem.reportPreviewFeatures=ignore
org.eclipse.jdt.core.compiler.release=disabled
org.eclipse.jdt.core.compiler.source=1.8


@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?><project-modules id="moduleCoreId" project-version="1.5.0">
<wb-module deploy-name="storage-manager-core">
<wb-resource deploy-path="/" source-path="/src/main/java"/>
<wb-resource deploy-path="/" source-path="/src/main/resources"/>
</wb-module>
</project-modules>


@@ -0,0 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<faceted-project>
<installed facet="java" version="1.8"/>
<installed facet="jst.utility" version="1.0"/>
</faceted-project>

CHANGELOG.md (new file, +16)

@@ -0,0 +1,16 @@
# Changelog for storage-manager-core
## [v2.9.3-SNAPSHOT] 2022-09-19
* set java to 1.8
## [v2.9.2] 2022-09-07
* restored close() method to IClient
* add slf4j-simple dependency with test scope
* update gcube-bom to 2.0.2
## [v2.9.1] 2022-06-28
* update to version 2.9.1 in order to have a fixed bom in the latest version of the range
## [v2.9.0] 2019-10-19
* SSL enabled

LICENSE.md (new file, +311)

@@ -0,0 +1,311 @@
# European Union Public Licence V. 1.1
EUPL © the European Community 2007
This European Union Public Licence (the “EUPL”) applies to the Work or Software
(as defined below) which is provided under the terms of this Licence. Any use of
the Work, other than as authorised under this Licence is prohibited (to the
extent such use is covered by a right of the copyright holder of the Work).
The Original Work is provided under the terms of this Licence when the Licensor
(as defined below) has placed the following notice immediately following the
copyright notice for the Original Work:
Licensed under the EUPL V.1.1
or has expressed by any other mean his willingness to license under the EUPL.
## 1. Definitions
In this Licence, the following terms have the following meaning:
- The Licence: this Licence.
- The Original Work or the Software: the software distributed and/or
communicated by the Licensor under this Licence, available as Source Code and
also as Executable Code as the case may be.
- Derivative Works: the works or software that could be created by the Licensee,
based upon the Original Work or modifications thereof. This Licence does not
define the extent of modification or dependence on the Original Work required
in order to classify a work as a Derivative Work; this extent is determined by
copyright law applicable in the country mentioned in Article 15.
- The Work: the Original Work and/or its Derivative Works.
- The Source Code: the human-readable form of the Work which is the most
convenient for people to study and modify.
- The Executable Code: any code which has generally been compiled and which is
meant to be interpreted by a computer as a program.
- The Licensor: the natural or legal person that distributes and/or communicates
the Work under the Licence.
- Contributor(s): any natural or legal person who modifies the Work under the
Licence, or otherwise contributes to the creation of a Derivative Work.
- The Licensee or “You”: any natural or legal person who makes any usage of the
Software under the terms of the Licence.
- Distribution and/or Communication: any act of selling, giving, lending,
renting, distributing, communicating, transmitting, or otherwise making
available, on-line or off-line, copies of the Work or providing access to its
essential functionalities at the disposal of any other natural or legal
person.
## 2. Scope of the rights granted by the Licence
The Licensor hereby grants You a world-wide, royalty-free, non-exclusive,
sub-licensable licence to do the following, for the duration of copyright vested
in the Original Work:
- use the Work in any circumstance and for all usage,
- reproduce the Work,
- modify the Original Work, and make Derivative Works based upon the Work,
- communicate to the public, including the right to make available or display
  the Work or copies thereof to the public and perform publicly, as the case
  may be, the Work,
- distribute the Work or copies thereof,
- lend and rent the Work or copies thereof,
- sub-license rights in the Work or copies thereof.
Those rights can be exercised on any media, supports and formats, whether now
known or later invented, as far as the applicable law permits so.
In the countries where moral rights apply, the Licensor waives his right to
exercise his moral right to the extent allowed by law in order to make effective
the licence of the economic rights here above listed.
The Licensor grants to the Licensee royalty-free, non exclusive usage rights to
any patents held by the Licensor, to the extent necessary to make use of the
rights granted on the Work under this Licence.
## 3. Communication of the Source Code
The Licensor may provide the Work either in its Source Code form, or as
Executable Code. If the Work is provided as Executable Code, the Licensor
provides in addition a machine-readable copy of the Source Code of the Work
along with each copy of the Work that the Licensor distributes or indicates, in
a notice following the copyright notice attached to the Work, a repository where
the Source Code is easily and freely accessible for as long as the Licensor
continues to distribute and/or communicate the Work.
## 4. Limitations on copyright
Nothing in this Licence is intended to deprive the Licensee of the benefits from
any exception or limitation to the exclusive rights of the rights owners in the
Original Work or Software, of the exhaustion of those rights or of other
applicable limitations thereto.
## 5. Obligations of the Licensee
The grant of the rights mentioned above is subject to some restrictions and
obligations imposed on the Licensee. Those obligations are the following:
Attribution right: the Licensee shall keep intact all copyright, patent or
trademarks notices and all notices that refer to the Licence and to the
disclaimer of warranties. The Licensee must include a copy of such notices and a
copy of the Licence with every copy of the Work he/she distributes and/or
communicates. The Licensee must cause any Derivative Work to carry prominent
notices stating that the Work has been modified and the date of modification.
Copyleft clause: If the Licensee distributes and/or communicates copies of the
Original Works or Derivative Works based upon the Original Work, this
Distribution and/or Communication will be done under the terms of this Licence
or of a later version of this Licence unless the Original Work is expressly
distributed only under this version of the Licence. The Licensee (becoming
Licensor) cannot offer or impose any additional terms or conditions on the Work
or Derivative Work that alter or restrict the terms of the Licence.
Compatibility clause: If the Licensee Distributes and/or Communicates Derivative
Works or copies thereof based upon both the Original Work and another work
licensed under a Compatible Licence, this Distribution and/or Communication can
be done under the terms of this Compatible Licence. For the sake of this clause,
“Compatible Licence” refers to the licences listed in the appendix attached to
this Licence. Should the Licensee's obligations under the Compatible Licence
conflict with his/her obligations under this Licence, the obligations of the
Compatible Licence shall prevail.
Provision of Source Code: When distributing and/or communicating copies of the
Work, the Licensee will provide a machine-readable copy of the Source Code or
indicate a repository where this Source will be easily and freely available for
as long as the Licensee continues to distribute and/or communicate the Work.
Legal Protection: This Licence does not grant permission to use the trade names,
trademarks, service marks, or names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the copyright notice.
## 6. Chain of Authorship
The original Licensor warrants that the copyright in the Original Work granted
hereunder is owned by him/her or licensed to him/her and that he/she has the
power and authority to grant the Licence.
Each Contributor warrants that the copyright in the modifications he/she brings
to the Work are owned by him/her or licensed to him/her and that he/she has the
power and authority to grant the Licence.
Each time You accept the Licence, the original Licensor and subsequent
Contributors grant You a licence to their contributions to the Work, under the
terms of this Licence.
## 7. Disclaimer of Warranty
The Work is a work in progress, which is continuously improved by numerous
contributors. It is not a finished work and may therefore contain defects or
“bugs” inherent to this type of software development.
For the above reason, the Work is provided under the Licence on an “as is” basis
and without warranties of any kind concerning the Work, including without
limitation merchantability, fitness for a particular purpose, absence of defects
or errors, accuracy, non-infringement of intellectual property rights other than
copyright as stated in Article 6 of this Licence.
This disclaimer of warranty is an essential part of the Licence and a condition
for the grant of any rights to the Work.
## 8. Disclaimer of Liability
Except in the cases of wilful misconduct or damages directly caused to natural
persons, the Licensor will in no event be liable for any direct or indirect,
material or moral, damages of any kind, arising out of the Licence or of the use
of the Work, including without limitation, damages for loss of goodwill, work
stoppage, computer failure or malfunction, loss of data or any commercial
damage, even if the Licensor has been advised of the possibility of such
damage. However, the Licensor will be liable under statutory product liability
laws as far such laws apply to the Work.
## 9. Additional agreements
While distributing the Original Work or Derivative Works, You may choose to
conclude an additional agreement to offer, and charge a fee for, acceptance of
support, warranty, indemnity, or other liability obligations and/or services
consistent with this Licence. However, in accepting such obligations, You may
act only on your own behalf and on your sole responsibility, not on behalf of
the original Licensor or any other Contributor, and only if You agree to
indemnify, defend, and hold each Contributor harmless for any liability incurred
by, or claims asserted against such Contributor by the fact You have accepted
any such warranty or additional liability.
## 10. Acceptance of the Licence
The provisions of this Licence can be accepted by clicking on an icon “I agree”
placed under the bottom of a window displaying the text of this Licence or by
affirming consent in any other similar way, in accordance with the rules of
applicable law. Clicking on that icon indicates your clear and irrevocable
acceptance of this Licence and all of its terms and conditions.
Similarly, you irrevocably accept this Licence and all of its terms and
conditions by exercising any rights granted to You by Article 2 of this Licence,
such as the use of the Work, the creation by You of a Derivative Work or the
Distribution and/or Communication by You of the Work or copies thereof.
## 11. Information to the public
In case of any Distribution and/or Communication of the Work by means of
electronic communication by You (for example, by offering to download the Work
from a remote location) the distribution channel or media (for example, a
website) must at least provide to the public the information requested by the
applicable law regarding the Licensor, the Licence and the way it may be
accessible, concluded, stored and reproduced by the Licensee.
## 12. Termination of the Licence
The Licence and the rights granted hereunder will terminate automatically upon
any breach by the Licensee of the terms of the Licence.
Such a termination will not terminate the licences of any person who has
received the Work from the Licensee under the Licence, provided such persons
remain in full compliance with the Licence.
## 13. Miscellaneous
Without prejudice of Article 9 above, the Licence represents the complete
agreement between the Parties as to the Work licensed hereunder.
If any provision of the Licence is invalid or unenforceable under applicable
law, this will not affect the validity or enforceability of the Licence as a
whole. Such provision will be construed and/or reformed so as necessary to make
it valid and enforceable.
The European Commission may publish other linguistic versions and/or new
versions of this Licence, so far this is required and reasonable, without
reducing the scope of the rights granted by the Licence. New versions of the
Licence will be published with a unique version number.
All linguistic versions of this Licence, approved by the European Commission,
have identical value. Parties can take advantage of the linguistic version of
their choice.
## 14. Jurisdiction
Any litigation resulting from the interpretation of this License, arising
between the European Commission, as a Licensor, and any Licensee, will be
subject to the jurisdiction of the Court of Justice of the European Communities,
as laid down in article 238 of the Treaty establishing the European Community.
Any litigation arising between Parties, other than the European Commission, and
resulting from the interpretation of this License, will be subject to the
exclusive jurisdiction of the competent court where the Licensor resides or
conducts its primary business.
## 15. Applicable Law
This Licence shall be governed by the law of the European Union country where
the Licensor resides or has his registered office.
This licence shall be governed by the Belgian law if:
- a litigation arises between the European Commission, as a Licensor, and any
  Licensee;
- the Licensor, other than the European Commission, has no residence or
  registered office inside a European Union country.
## Appendix
“Compatible Licences” according to article 5 EUPL are:
- GNU General Public License (GNU GPL) v. 2
- Open Software License (OSL) v. 2.1, v. 3.0
- Common Public License v. 1.0
- Eclipse Public License v. 1.0
- Cecill v. 2.0

README.md (new file, +18)

@@ -0,0 +1,18 @@
storage-manager-core
----
## Examples of use
## Deployment
Notes about how to deploy this component on an infrastructure or link to wiki doc (if any).
## Documentation
See storage-manager-core on [Wiki](https://gcube.wiki.gcube-system.org/gcube/Storage_Manager).
## License
TBP

pom.xml (65 lines changed)

@@ -8,9 +8,11 @@
</parent>
<groupId>org.gcube.contentmanagement</groupId>
<artifactId>storage-manager-core</artifactId>
<version>2.9.0</version>
<version>2.9.3-SNAPSHOT</version>
<properties>
<distroDirectory>${project.basedir}/distro</distroDirectory>
<maven.compiler.target>1.8</maven.compiler.target>
<maven.compiler.source>1.8</maven.compiler.source>
</properties>
<scm>
<connection>scm:git:https://code-repo.d4science.org/gCubeSystem/${project.artifactId}.git</connection>
@@ -23,7 +25,7 @@
<dependency>
<groupId>org.gcube.distribution</groupId>
<artifactId>gcube-bom</artifactId>
<version>1.4.0</version>
<version>2.0.2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
@@ -37,7 +39,7 @@
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.6.0</version>
<version>3.12.0</version>
</dependency>
<dependency>
<groupId>org.gcube.core</groupId>
@@ -53,54 +55,11 @@
<artifactId>commons-codec</artifactId>
<version>1.8</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.32</version>
<scope>test</scope>
</dependency>
</dependencies>
<!-- <build> -->
<!-- <plugins> -->
<!-- <plugin> -->
<!-- <groupId>org.apache.maven.plugins</groupId> -->
<!-- <artifactId>maven-resources-plugin</artifactId> -->
<!-- <version>2.5</version> -->
<!-- <executions> -->
<!-- <execution> -->
<!-- <id>copy-profile</id> -->
<!-- <phase>install</phase> -->
<!-- <goals> -->
<!-- <goal>copy-resources</goal> -->
<!-- </goals> -->
<!-- <configuration> -->
<!-- <outputDirectory>target</outputDirectory> -->
<!-- <resources> -->
<!-- <resource> -->
<!-- <directory>${distroDirectory}</directory> -->
<!-- <filtering>true</filtering> -->
<!-- <includes> -->
<!-- <include>profile.xml</include> -->
<!-- </includes> -->
<!-- </resource> -->
<!-- </resources> -->
<!-- </configuration> -->
<!-- </execution> -->
<!-- </executions> -->
<!-- </plugin> -->
<!-- <plugin> -->
<!-- <groupId>org.apache.maven.plugins</groupId> -->
<!-- <artifactId>maven-assembly-plugin</artifactId> -->
<!-- -->
<!-- <configuration> -->
<!-- <descriptors> -->
<!-- <descriptor>${distroDirectory}/descriptor.xml</descriptor> -->
<!-- </descriptors> -->
<!-- </configuration> -->
<!-- <executions> -->
<!-- <execution> -->
<!-- <id>servicearchive</id> -->
<!-- <phase>install</phase> -->
<!-- <goals> -->
<!-- <goal>single</goal> -->
<!-- </goals> -->
<!-- </execution> -->
<!-- </executions> -->
<!-- </plugin> -->
<!-- </plugins> -->
<!-- </build> -->
</project>
</project>


@@ -1,15 +0,0 @@
log4j.rootLogger=INFO, A1, stdout
log4j.appender.A1=org.apache.log4j.RollingFileAppender
log4j.appender.A1.File=log.txt
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
# ***** Max file size is set to 100KB
log4j.appender.A1.MaxFileSize=100MB
# ***** Keep one backup file
log4j.appender.A1.MaxBackupIndex=1
#CONSOLE
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Threshold=INFO
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%t] %-5p %c %d{dd MMM yyyy ;HH:mm:ss.SSS} - %m%n


@@ -88,7 +88,15 @@ public class MyFile {
private String readPreference;
private String rootPath;
private boolean replace=false;
private String token;
private String region;
final Logger logger = LoggerFactory.getLogger(MyFile.class);
public MyFile(boolean lock){
setLock(lock);
@@ -689,6 +697,20 @@ public class MyFile {
public void setId2(String id2) {
this.id2 = id2;
}
public String getToken() {
return token;
}
public void setToken(String token) {
this.token = token;
}
public void setRegion(String region) {
this.region=region;
}
public String getRegion() {
return region;
}
}


@@ -1,6 +1,7 @@
package org.gcube.contentmanagement.blobstorage.service;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.service.impl.AmbiguousResource;
import org.gcube.contentmanagement.blobstorage.service.impl.LocalResource;
import org.gcube.contentmanagement.blobstorage.service.impl.RemoteResource;
@@ -96,6 +97,7 @@ public RemoteResourceInfo renewTTL(String key);
*
* @return RemoteResource object
*/
@Deprecated
RemoteResource getUrl();
/**
@@ -205,6 +207,11 @@ public RemoteResourceComplexInfo getMetaFile();
/**
* close the connections to backend storage system
*/
public void forceClose();
/**
* close the connections to backend storage system. Method restored for backward compatibility
*/
public void close();
@@ -224,12 +231,16 @@ public String getId(String id);
public RemoteResource getRemotePath();
@Deprecated
public RemoteResource getHttpUrl(boolean forceCreation);
@Deprecated
public RemoteResource getHttpUrl(String backendType, boolean forceCreation);
@Deprecated
public RemoteResource getHttpUrl(String backendType);
@Deprecated
public RemoteResource getHttpUrl();
public RemoteResource getHttpsUrl(boolean forceCreation);
@@ -259,4 +270,6 @@ public abstract RemoteResourceBoolean exist();
public abstract RemoteResourceBoolean exist(String backendType);
public MemoryType getGcubeMemoryType();
}
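The interface hunks above deprecate the plain-HTTP URL getters in favour of their HTTPS counterparts and add `forceClose()` alongside the restored `close()`. A minimal stand-in sketch of that contract follows; only the method names come from the diff, while the interface shape and the demo implementation are assumptions for illustration, not the real gCube API:

```java
// Illustrative stub: method names (getUrl, getHttpsUrl, close, forceClose)
// are taken from the IClient diff above; everything else is assumed.
interface UrlClient {
    @Deprecated
    String getUrl();        // deprecated: returns a plain-HTTP URL
    String getHttpsUrl();   // preferred, SSL-enabled replacement
    void close();           // restored for backward compatibility
    void forceClose();      // closes the connections to the backend
}

class DemoUrlClient implements UrlClient {
    private boolean open = true;
    public String getUrl()      { return "http://example.org/file-id"; }
    public String getHttpsUrl() { return "https://example.org/file-id"; }
    public void close()         { open = false; }
    public void forceClose()    { open = false; }
    public boolean isOpen()     { return open; }
}
```

Callers migrating off the deprecated methods would switch `getUrl()` call sites to `getHttpsUrl()` and release connections through either `close()` or `forceClose()`.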


@@ -29,6 +29,7 @@ public class DirectoryBucket {
String path;
String[] server;
String user, password;
TransportManager tm;
public DirectoryBucket(String[] server, String user, String password, String path, String author){
if(logger.isDebugEnabled())
logger.debug("DirectoryBucket PATH: "+path);
@@ -91,7 +92,7 @@ public class DirectoryBucket {
String[] bucketList=null;
bucketList=retrieveBucketsName(path, rootArea);
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, resource.getGcubeMemoryType(), dbNames, resource.getWriteConcern(), resource.getReadPreference());
tm=tmf.getTransport(tm, backendType, resource.getGcubeMemoryType(), dbNames, resource.getWriteConcern(), resource.getReadPreference());
// TerrastoreClient client=new TerrastoreClient( new OrderedHostManager(Arrays.asList(server)), new HTTPConnectionFactory());
for(int i=0;i<bucketList.length;i++){
if(logger.isDebugEnabled())
@@ -124,7 +125,7 @@
logger.debug("bucketDir Coded: "+bucketDirCoded);
bucketList=retrieveBucketsName(bucket, rootArea);
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, resource.getGcubeMemoryType(), dbNames, resource.getWriteConcern(),resource.getReadPreference());
tm=tmf.getTransport(tm, backendType, resource.getGcubeMemoryType(), dbNames, resource.getWriteConcern(),resource.getReadPreference());
for(int i=0;i<bucketList.length;i++){
if(logger.isDebugEnabled())
logger.debug("REMOVE: check "+bucketList[i]+" bucketDirCoded: "+bucketDirCoded );


@@ -8,7 +8,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryBucket;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryEntity;
import org.gcube.contentmanagement.blobstorage.service.operation.OperationManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
@@ -27,7 +26,7 @@ import org.gcube.contentmanagement.blobstorage.resource.StorageObject;
*/
public class RemoteResource extends Resource{
TransportManager tm;
public RemoteResource(MyFile file, ServiceEngine engine) {
super(file, engine);
@@ -112,7 +111,7 @@ public class RemoteResource extends Resource{
if(engine.getCurrentOperation().equalsIgnoreCase("showdir")){
dir = new BucketCoding().bucketDirCoding(dir, engine.getContext());
TransportManagerFactory tmf= new TransportManagerFactory(engine.primaryBackend, engine.getBackendUser(), engine.getBackendPassword());
TransportManager tm=tmf.getTransport(engine.getBackendType(), engine.getGcubeMemoryType(), engine.getDbNames(), engine.getWriteConcern(), engine.getReadConcern());
tm=tmf.getTransport(tm, engine.getBackendType(), engine.getGcubeMemoryType(), engine.getDbNames(), engine.getWriteConcern(), engine.getReadConcern());
Map<String, StorageObject> mapDirs=null;
try {
mapDirs = tm.getValues(getMyFile(), dir, DirectoryEntity.class);
@@ -133,7 +132,7 @@ public class RemoteResource extends Resource{
dirBuc.removeDirBucket(getMyFile(), dir, engine.getContext(), engine.getBackendType(), engine.getDbNames());
else{
TransportManagerFactory tmf=new TransportManagerFactory(engine.primaryBackend, engine.getBackendUser(), engine.getBackendPassword());
TransportManager tm=tmf.getTransport(Costants.CLIENT_TYPE, engine.getGcubeMemoryType(), engine.getDbNames(), engine.getWriteConcern(), engine.getReadConcern());
tm=tmf.getTransport(tm, Costants.CLIENT_TYPE, engine.getGcubeMemoryType(), engine.getDbNames(), engine.getWriteConcern(), engine.getReadConcern());
dir=new BucketCoding().bucketFileCoding(dir, engine.getContext());
try {
tm.removeDir(dir, getMyFile());


@@ -1,9 +1,9 @@
package org.gcube.contentmanagement.blobstorage.service.impl;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import org.gcube.contentmanagement.blobstorage.resource.AccessType;
@@ -17,7 +17,6 @@ import org.gcube.contentmanagement.blobstorage.service.directoryOperation.Bucket
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.Encrypter;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.Encrypter.EncryptionException;
import org.gcube.contentmanagement.blobstorage.service.operation.*;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
@@ -71,13 +70,19 @@ public class ServiceEngine implements IClient {
private String user;
//backend server password
private String password;
// if the backend is mongodb, this field is used for crypt/decrypt. If the backend is S3, this field is a token.
private String passPhrase;
private String resolverHost;
private String[] dbNames;
// private static final String DEFAULT_RESOLVER_HOST= "data.d4science.org";
private String write;
private String read;
private String token;
private String region;
public ServiceEngine(String[] server){
this.primaryBackend=server;
}
@@ -142,24 +147,24 @@ public class ServiceEngine implements IClient {
}
public String getPublicArea() {
private String getPublicArea() {
return publicArea;
}
public void setPublicArea(String publicArea) {
private void setPublicArea(String publicArea) {
logger.trace("public area is "+publicArea);
this.publicArea = publicArea;
}
public String getHomeArea() {
private String getHomeArea() {
return homeArea;
}
public void setHomeArea(String rootPath) {
private void setHomeArea(String rootPath) {
this.homeArea = rootPath;
}
public String getEnvironment() {
private String getEnvironment() {
return environment;
}
@@ -167,7 +172,7 @@ public class ServiceEngine implements IClient {
* set the remote root path
* @param environment
*/
public void setEnvironment(String environment) {
private void setEnvironment(String environment) {
// delete initial / from variable environment
String newEnv=environment;
int ind=newEnv.indexOf('/');
@@ -179,11 +184,11 @@ public class ServiceEngine implements IClient {
this.environment = newEnv;
}
public String getBucketID() {
private String getBucketID() {
return bucketID;
}
public void setBucketID(String bucketID) {
private void setBucketID(String bucketID) {
this.bucketID=bucketID;
}
@@ -210,7 +215,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("download");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.DOWNLOAD);
return new LocalResource(file, this);
}
@@ -235,7 +240,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getSize");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_SIZE);
return new RemoteResourceInfo(file, this);
}
@@ -248,7 +253,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getMetaFile");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_META_FILE);
return new RemoteResourceComplexInfo(file, this);
}
@@ -259,7 +264,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getTotalUserVolume");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_TOTAL_USER_VOLUME);
file = new Resource(file, this).setGenericProperties(getContext(), owner, null, "remote");
file.setRemotePath("/");
@@ -291,7 +296,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getTotalUserItems");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_USER_TOTAL_ITEMS);
file = new Resource(file, this).setGenericProperties(getContext(), owner, "", "remote");
file.setRemotePath("/");
@@ -323,7 +328,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getFolderSize");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_FOLDER_TOTAL_VOLUME);
return new RemoteResourceFolderInfo(file, this);
}
@@ -334,7 +339,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getFolderCount");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_FOLDER_TOTAL_ITEMS);
return new RemoteResourceFolderInfo(file, this);
}
@@ -345,7 +350,7 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("getFolderLastUpdate");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.GET_FOLDER_LAST_UPDATE);
return new RemoteResourceFolderInfo(file, this);
}
@@ -365,7 +370,7 @@ public class ServiceEngine implements IClient {
}
setCurrentOperation("upload");
setReplaceOption(replace);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.UPLOAD);
file.setReplaceOption(replace);
return new LocalResource(file, this);
@@ -387,7 +392,7 @@ public class ServiceEngine implements IClient {
}
setCurrentOperation("upload");
setReplaceOption(replace);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.UPLOAD);
file=setMimeType(file, mimeType);
file.setReplaceOption(replace);
@@ -416,7 +421,7 @@ public class ServiceEngine implements IClient {
// remove object operation
setCurrentOperation("remove");
file=setOperationInfo(file, OPERATION.REMOVE);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new RemoteResource(file, this);
}
@@ -533,7 +538,7 @@ public class ServiceEngine implements IClient {
file.setPassPhrase(passPhrase);
setCurrentOperation("getUrl");
file=setOperationInfo(file, OPERATION.GET_URL);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
RemoteResource resource=new RemoteResource(file, this);
return resource;
}
@@ -568,7 +573,7 @@ public class ServiceEngine implements IClient {
file.setPassPhrase(passPhrase);
setCurrentOperation("getHttpUrl");
file=setOperationInfo(file, OPERATION.GET_HTTP_URL);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
RemoteResource resource=new RemoteResource(file, this);
return resource;
}
@@ -605,7 +610,7 @@ public class ServiceEngine implements IClient {
file.setPassPhrase(passPhrase);
setCurrentOperation("getHttpsUrl");
file=setOperationInfo(file, OPERATION.GET_HTTPS_URL);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
RemoteResource resource=new RemoteResource(file, this);
return resource;
}
@@ -669,7 +674,7 @@ public class ServiceEngine implements IClient {
backendType=setBackendType(backendType);
file = new MyFile(true);
setCurrentOperation("lock");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.LOCK);
return new AmbiguousResource(file, this);
}
@@ -688,7 +693,7 @@ public class ServiceEngine implements IClient {
// put(true);
setCurrentOperation("unlock");
file=setOperationInfo(file, OPERATION.UNLOCK);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new AmbiguousResource(file, this);
}
@@ -705,7 +710,7 @@ public class ServiceEngine implements IClient {
// put(true);
setCurrentOperation("getTTL");
file=setOperationInfo(file, OPERATION.GET_TTL);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new RemoteResourceInfo(file, this);
}
@@ -723,7 +728,7 @@ public class ServiceEngine implements IClient {
file.setGenericPropertyField(field);
setCurrentOperation("getMetaInfo");
file=setOperationInfo(file, OPERATION.GET_META_INFO);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new RemoteResource(file, this);
}
@@ -740,7 +745,7 @@ public class ServiceEngine implements IClient {
file.setGenericPropertyValue(value);
setCurrentOperation("setMetaInfo");
file=setOperationInfo(file, OPERATION.SET_META_INFO);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new RemoteResource(file, this);
}
@@ -757,7 +762,7 @@ public class ServiceEngine implements IClient {
// put(true);
setCurrentOperation("renewTTL");
file=setOperationInfo(file, OPERATION.RENEW_TTL);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new RemoteResourceInfo(file, this);
}
@@ -774,7 +779,7 @@ public class ServiceEngine implements IClient {
file=null;
setCurrentOperation("link");
file=setOperationInfo(file, OPERATION.LINK);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResourceSource(file, this);
}
@@ -803,7 +808,7 @@ public class ServiceEngine implements IClient {
setCurrentOperation("copy");
file=setOperationInfo(file, OPERATION.COPY);
file.setReplaceOption(replaceOption);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResourceSource(file, this);
}
@@ -820,7 +825,7 @@ public class ServiceEngine implements IClient {
file=null;
setCurrentOperation("duplicate");
file=setOperationInfo(file, OPERATION.DUPLICATE);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResource(file, this);
}
@@ -846,7 +851,7 @@ public class ServiceEngine implements IClient {
setCurrentOperation("softcopy");
file=setOperationInfo(file, OPERATION.SOFT_COPY);
file.setReplaceOption(replaceOption);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResourceSource(file, this);
}
@@ -865,7 +870,7 @@ public class ServiceEngine implements IClient {
file=null;
setCurrentOperation("move");
file=setOperationInfo(file, OPERATION.MOVE);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResourceSource(file, this);
}
@@ -882,7 +887,7 @@ public class ServiceEngine implements IClient {
file=null;
setCurrentOperation("copy_dir");
file=setOperationInfo(file, OPERATION.COPY_DIR);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResourceSource(file, this);
}
@@ -899,18 +904,18 @@ public class ServiceEngine implements IClient {
file=null;
setCurrentOperation("move_dir");
file=setOperationInfo(file, OPERATION.MOVE_DIR);
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), getMyFile(), backendType, getDbNames(), getToken());
return new RemoteResourceSource(file, this);
}
@Override
public void close(){
currentOperation="close";
public void forceClose(){
currentOperation="forceclose";
file.setOwner(owner);
getMyFile().setRemoteResource(REMOTE_RESOURCE.PATH);
setMyFile(file);
service.setResource(getMyFile());
service.setTypeOperation("close");
service.setTypeOperation("forceclose");
try {
if(((file.getInputStream() != null) || (file.getOutputStream()!=null)) || ((file.getLocalPath() != null) || (file.getRemotePath() != null)))
service.startOperation(file,file.getRemotePath(), owner, primaryBackend, Costants.DEFAULT_CHUNK_OPTION, getContext(), isReplaceOption());
@@ -924,6 +929,26 @@ public class ServiceEngine implements IClient {
}
}
@Override
public void close(){
currentOperation="close";
file.setOwner(owner);
getMyFile().setRemoteResource(REMOTE_RESOURCE.PATH);
setMyFile(file);
service.setResource(getMyFile());
service.setTypeOperation("close");
try {
if(((file.getInputStream() != null) || (file.getOutputStream()!=null)) || ((file.getLocalPath() != null) || (file.getRemotePath() != null)))
service.startOperation(file,file.getRemotePath(), owner, primaryBackend, Costants.DEFAULT_CHUNK_OPTION, getContext(), isReplaceOption());
else{
logger.error("incompatible parameters");
}
} catch (Throwable t) {
logger.error("get()", t.getCause());
throw new RemoteBackendException(" Error in "+currentOperation+" operation ", t.getCause());
}
}
public String getServiceClass() {
@@ -1002,6 +1027,10 @@ public class ServiceEngine implements IClient {
file.setWriteConcern(getWriteConcern());
if(getReadConcern() != null)
file.setReadPreference(getReadConcern());
if(!Objects.isNull(getToken()))
file.setToken(getToken());
if(!Objects.isNull(getRegion()))
file.setRegion(getRegion());
return file;
}
@@ -1063,19 +1092,24 @@ public class ServiceEngine implements IClient {
public String getId(String id){
if(ObjectId.isValid(id))
return id;
try {
if(Base64.isBase64(id)){
byte[] valueDecoded= Base64.decodeBase64(id);
String encryptedID = new String(valueDecoded);
return new Encrypter("DES", getPassPhrase()).decrypt(encryptedID);
}else{
return new Encrypter("DES", getPassPhrase()).decrypt(id);
if (getBackendType().equals("MongoDB")){
if(ObjectId.isValid(id))
return id;
try {
if(Base64.isBase64(id)){
byte[] valueDecoded= Base64.decodeBase64(id);
String encryptedID = new String(valueDecoded);
return new Encrypter("DES", getPassPhrase()).decrypt(encryptedID);
}else{
return new Encrypter("DES", getPassPhrase()).decrypt(id);
}
} catch (EncryptionException e) {
e.printStackTrace();
}
} catch (EncryptionException e) {
e.printStackTrace();
}else {
throw new RemoteBackendException("The backend is not MongoDB: the id cannot be decrypted because it should not be encrypted");
}
return null;
}
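The reworked `getId` above only attempts decryption when the backend is MongoDB, unwrapping Base64 first when the id arrives encoded. A rough sketch of that branching, with the DES `Encrypter` stubbed out (the `decrypt` helper here is a placeholder, not the real API):

```java
import java.util.Base64;

public class GetIdSketch {
    // Stand-in for new Encrypter("DES", passPhrase).decrypt(...)
    static String decrypt(String s) { return s.toLowerCase(); }

    static String getId(String backendType, String id) {
        if (!"MongoDB".equals(backendType))
            throw new IllegalStateException(
                    "backend is not MongoDB: the id is not encrypted, nothing to decode");
        try {
            // ids may arrive Base64-wrapped; unwrap before decrypting
            String decoded = new String(Base64.getDecoder().decode(id));
            return decrypt(decoded);
        } catch (IllegalArgumentException notBase64) {
            return decrypt(id);   // plain (non-wrapped) encrypted id
        }
    }

    public static void main(String[] args) {
        String wrapped = Base64.getEncoder().encodeToString("ABC".getBytes());
        assert "abc".equals(getId("MongoDB", wrapped));
        boolean threw = false;
        try { getId("S3", "ABC"); } catch (IllegalStateException e) { threw = true; }
        assert threw;
        System.out.println("ok");
    }
}
```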
@@ -1086,7 +1120,7 @@ public class ServiceEngine implements IClient {
setCurrentOperation("getRemotePath");
file=setOperationInfo(file, OPERATION.GET_REMOTE_PATH);
file.setRootPath(this.getPublicArea());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
return new RemoteResource(file, this);
}
@@ -1134,7 +1168,7 @@ public class ServiceEngine implements IClient {
}
protected String[] getDbNames(){
public String[] getDbNames(){
return this.dbNames;
}
@@ -1157,9 +1191,24 @@ public class ServiceEngine implements IClient {
logger.debug("get() - start");
}
setCurrentOperation("exist");
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames());
this.service=new OperationManager(primaryBackend, user, password, getCurrentOperation(), file, backendType, getDbNames(), getToken());
file=setOperationInfo(file, OPERATION.EXIST);
return new RemoteResourceBoolean(file, this);
}
public String getToken() {
return token;
}
public void setToken(String token) {
this.token = token;
}
public String getRegion() {
return region;
}
public void setRegion(String region) {
this.region = region;
}
}
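The new `token` and `region` fields are copied onto the request object only when they are set (the `Objects.isNull` guards added to `setOperationInfo` above). A minimal sketch of that conditional propagation, with `MyFile` reduced to a stand-in holder:

```java
import java.util.Objects;

public class OperationInfoSketch {
    // Stand-in for the real MyFile resource class.
    static class MyFile {
        String token;
        String region;
    }

    static String token = "s3-token";   // assumed engine state: token configured
    static String region = null;        // assumed engine state: region not configured

    // Mirrors the Objects.isNull guards added to setOperationInfo.
    static MyFile setOperationInfo(MyFile file) {
        if (!Objects.isNull(token))  file.token  = token;
        if (!Objects.isNull(region)) file.region = region;
        return file;
    }

    public static void main(String[] args) {
        MyFile f = setOperationInfo(new MyFile());
        assert "s3-token".equals(f.token);
        assert f.region == null;        // unset fields stay untouched
        System.out.println("ok");
    }
}
```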


@@ -117,7 +117,7 @@ public class ChunkConsumer implements Runnable {
synchronized(ChunkConsumer.class){
String [] randomServer=randomizeServer(server);
TransportManagerFactory tmf=new TransportManagerFactory(randomServer, null, null);
client.set(tmf.getTransport(Costants.CLIENT_TYPE, null, null, myFile.getWriteConcern(), myFile.getReadPreference()));
client.set(tmf.getTransport(null, Costants.CLIENT_TYPE, null, null, myFile.getWriteConcern(), myFile.getReadPreference()));
}
if(logger.isDebugEnabled()){
logger.debug("waiting time for upload: "


@@ -4,7 +4,6 @@ import java.net.UnknownHostException;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@@ -38,8 +37,7 @@ public abstract class Copy extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String id=null;
try {
// id=tm.copy(myFile, sourcePath, destinationPath);


@@ -5,7 +5,6 @@ import java.util.List;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@@ -41,8 +40,7 @@ public abstract class CopyDir extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm = getTransport(myFile);
List<String> ids=null;
try {
// ids=tm.copyDir(myFile, sourcePath, destinationPath);
@@ -54,6 +52,8 @@ public abstract class CopyDir extends Operation{
}
return ids.toString();
}
@Override


@@ -4,7 +4,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@@ -51,8 +50,9 @@ public abstract class Download extends Operation{
id=get(this, myFile, false);
} catch (Throwable e) {
TransportManagerFactory tmf=new TransportManagerFactory(getServer(), getUser(), getPassword());
TransportManager tm=tmf.getTransport(getBackendType(), myFile.getGcubeMemoryType(), getDbNames(), myFile.getWriteConcern(), myFile.getReadPreference());
// TransportManagerFactory tmf=new TransportManagerFactory(getServer(), getUser(), getPassword());
// TransportManager tm=tmf.getTransport(getBackendType(), myFile.getGcubeMemoryType(), getDbNames(), myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
tm.close();
logger.error("Problem in download from: "+myFile.getRemotePath()+": "+e.getMessage());
// e.printStackTrace();
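The error path in the `Download` hunk above fetches the current transport via `getTransport(myFile)` and closes it before surfacing the failure, so a dead connection is not leaked. An illustrative sketch with stand-in classes (not the real gCube API):

```java
public class DownloadErrorPathSketch {
    // Stand-in for the real TransportManager.
    static class Transport {
        boolean closed = false;
        void close() { closed = true; }
    }

    static Transport shared = new Transport();

    static Transport getTransport() { return shared; }   // one shared instance

    static void download(String remotePath) {
        try {
            throw new RuntimeException("backend unreachable");   // simulated failure
        } catch (RuntimeException e) {
            Transport tm = getTransport();
            tm.close();                                          // do not leak the connection
            throw new RuntimeException("Problem in download from: " + remotePath, e);
        }
    }

    public static void main(String[] args) {
        boolean threw = false;
        try { download("/some/path"); } catch (RuntimeException e) { threw = true; }
        assert threw && shared.closed;
        System.out.println("ok");
    }
}
```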


@@ -4,7 +4,6 @@ import java.io.OutputStream;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.operation.DownloadOperator;
import org.slf4j.Logger;
@@ -40,8 +39,7 @@ public class DownloadAndLock extends Operation {
//TODO add field for file lock
get(download,myFile, true);
} catch (Exception e) {
TransportManagerFactory tmf=new TransportManagerFactory(getServer(), getUser(), getPassword());
TransportManager tm=tmf.getTransport(getBackendType(), myFile.getGcubeMemoryType(), getDbNames(), myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
tm.close();
throw new RemoteBackendException(" Error in downloadAndLock operation ", e.getCause());
}


@@ -7,7 +7,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@@ -31,8 +30,7 @@ public abstract class DuplicateFile extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String id=null;
try {
// id = tm.duplicateFile(myFile, bucket);


@@ -8,7 +8,6 @@ import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendEx
import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -31,8 +30,7 @@ public class Exist extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
boolean isPresent=false;
try {
isPresent = tm.exist(bucket);


@@ -23,12 +23,7 @@ public class FileWriter extends Thread{
final Logger logger=LoggerFactory.getLogger(FileWriter.class);
private Monitor monitor;
private int id;
// private MyFile myFile;
// private byte[] encode;
// private int offset;
// private static int len=0;
private OutputStream out;
// private String path;
private byte[] full;


@@ -2,28 +2,25 @@ package org.gcube.contentmanagement.blobstorage.service.operation;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class Close extends Operation{
public class ForceClose extends Operation{
/**
* Logger for this class
*/
final Logger logger=LoggerFactory.getLogger(ForceClose.class);
// public String file_separator = ServiceEngine.FILE_SEPARATOR;//System.getProperty("file.separator");
public Close(String[] server, String user, String pwd, String bucket, Monitor monitor, boolean isChunk, String backendType, String[] dbs) {
public ForceClose(String[] server, String user, String pwd, String bucket, Monitor monitor, boolean isChunk, String backendType, String[] dbs) {
super(server, user, pwd, bucket, monitor, isChunk, backendType, dbs);
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
try {
tm.close();
tm.forceClose();
} catch (Exception e) {
throw new RemoteBackendException(" Error in ForceClose operation ", e.getCause()); }
if (logger.isDebugEnabled()) {


@@ -3,9 +3,7 @@ package org.gcube.contentmanagement.blobstorage.service.operation;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryBucket;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
@@ -23,8 +21,7 @@ public class GetFolderCount extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
long dim=0;
try {
dim = tm.getFolderTotalItems(bucket);


@@ -4,7 +4,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryBucket;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
@ -22,8 +21,7 @@ public class GetFolderSize extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
long dim=0;
try {
dim = tm.getFolderTotalVolume(bucket);


@ -12,6 +12,12 @@ import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
/**
* This class is replaced by getHttpsUrl.
* @author roberto
*
*/
@Deprecated
public class GetHttpUrl extends Operation {
// private OutputStream os;
@ -46,7 +52,8 @@ public class GetHttpUrl extends Operation {
String urlBase="smp://"+resolverHost+Costants.URL_SEPARATOR;
String urlParam="";
try {
String id=getId(myFile.getAbsoluteRemotePath(), myFile.isForceCreation(), myFile.getGcubeMemoryType(), myFile.getWriteConcern(), myFile.getReadPreference());
// String id=getId(myFile.getAbsoluteRemotePath(), myFile.isForceCreation(), myFile.getGcubeMemoryType(), myFile.getWriteConcern(), myFile.getReadPreference());
String id=getId(myFile);
String phrase=myFile.getPassPhrase();
// urlParam =new StringEncrypter("DES", phrase).encrypt(id);
urlParam = new Encrypter("DES", phrase).encrypt(id);
@ -71,13 +78,11 @@ public class GetHttpUrl extends Operation {
return httpUrl.toString();
}
@Deprecated
private String getId(String path, boolean forceCreation, MemoryType memoryType, String writeConcern, String readPreference){
String id=null;
if(tm ==null){
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(backendType, memoryType, dbNames, writeConcern, readPreference);
}
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(tm, backendType, memoryType, dbNames, writeConcern, readPreference);
try {
id = tm.getId(bucket, forceCreation);
} catch (Exception e) {
@ -89,6 +94,21 @@ public class GetHttpUrl extends Operation {
return id;
}
private String getId(MyFile myFile){
String id=null;
TransportManager tm=getTransport(myFile);
try {
id = tm.getId(bucket, myFile.isForceCreation());
} catch (Exception e) {
tm.close();
throw new RemoteBackendException(" Error in GetUrl operation. Problem to discover remote file:"+bucket+" "+ e.getMessage(), e.getCause()); }
if (logger.isDebugEnabled()) {
logger.debug(" PATH " + bucket);
}
return id;
}
private URL translate(URL url) throws IOException {
logger.debug("translating: "+url);
String urlString=url.toString();


@ -49,6 +49,7 @@ public class GetHttpsUrl extends Operation {
String urlParam="";
try {
String id=getId(myFile.getAbsoluteRemotePath(), myFile.isForceCreation(), myFile.getGcubeMemoryType(), myFile.getWriteConcern(), myFile.getReadPreference());
// String id=getId(myFile);
String phrase=myFile.getPassPhrase();
// urlParam =new StringEncrypter("DES", phrase).encrypt(id);
urlParam = new Encrypter("DES", phrase).encrypt(id);
@ -73,12 +74,25 @@ public class GetHttpsUrl extends Operation {
return httpsUrl.toString();
}
private String getId(MyFile myFile){
String id=null;
TransportManager tm=getTransport(myFile);
try {
id = tm.getId(bucket, myFile.isForceCreation());
} catch (Exception e) {
tm.close();
throw new RemoteBackendException(" Error in GetUrl operation. Problem to discover remote file:"+bucket+" "+ e.getMessage(), e.getCause()); }
if (logger.isDebugEnabled()) {
logger.debug(" PATH " + bucket);
}
return id;
}
@Deprecated
private String getId(String path, boolean forceCreation, MemoryType memoryType, String writeConcern, String readPreference){
String id=null;
if(tm ==null){
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(backendType, memoryType, dbNames, writeConcern, readPreference);
}
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(tm, backendType, memoryType, dbNames, writeConcern, readPreference);
try {
id = tm.getId(bucket, forceCreation);
} catch (Exception e) {


@ -4,7 +4,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -31,13 +30,12 @@ public class GetMetaFile extends Operation{
*
*/
public MyFile doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
long dim=0;
String id=null;
String mime=null;
try {
dim = tm.getSize(bucket);
dim = tm.getSize(bucket, myFile);
id=tm.getId(bucket, false);
mime=tm.getFileProperty(bucket, "mimetype");
myFile.setOwner(tm.getFileProperty(bucket, "owner"));


@ -4,7 +4,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -21,8 +20,7 @@ public class GetMetaInfo extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String value=null;
try {
value=tm.getFileProperty(bucket, myFile.getGenericPropertyField());


@ -3,7 +3,6 @@ package org.gcube.contentmanagement.blobstorage.service.operation;
import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -21,8 +20,7 @@ public class GetRemotePath extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String path=null;
try {
path = tm.getRemotePath(bucket);


@ -4,7 +4,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -28,11 +27,10 @@ public class GetSize extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
long dim=0;
try {
dim = tm.getSize(bucket);
dim = tm.getSize(bucket, myFile);
} catch (Exception e) {
tm.close();
throw new RemoteBackendException(" Error in GetSize operation ", e.getCause()); }


@ -4,7 +4,6 @@ import java.io.OutputStream;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -36,8 +35,7 @@ public class GetTTL extends Operation {
TransportManager tm=null;
try {
// add a field for the file lock
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
tm=getTransport(myFile);
currentTTL=tm.getTTL(bucket);
} catch (Exception e) {
tm.close();


@ -9,8 +9,12 @@ import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
/**
* This class is replaced by getHttpsUrl.
* @author roberto
*
*/
@Deprecated
public class GetUrl extends Operation{
// private OutputStream os;
@ -40,11 +44,10 @@ public class GetUrl extends Operation{
String urlBase="smp://"+resolverHost+Costants.URL_SEPARATOR;
String urlParam="";
try {
String id=getId(myFile.getAbsoluteRemotePath(), myFile.isForceCreation(), myFile.getGcubeMemoryType(), myFile.getWriteConcern(), myFile.getReadPreference());
// String id=getId(myFile.getAbsoluteRemotePath(), myFile.isForceCreation(), myFile.getGcubeMemoryType(), myFile.getWriteConcern(), myFile.getReadPreference());
String id=getId(myFile);
String phrase=myFile.getPassPhrase();
// urlParam =new StringEncrypter("DES", phrase).encrypt(id);
urlParam = new Encrypter("DES", phrase).encrypt(id);
// String urlEncoded=URLEncoder.encode(urlParam, "UTF-8");
} catch (EncryptionException e) {
throw new RemoteBackendException(" Error in getUrl operation problem to encrypt the string", e.getCause());
}
@ -56,12 +59,11 @@ public class GetUrl extends Operation{
return url;
}
@Deprecated
private String getId(String path, boolean forceCreation, MemoryType memoryType, String writeConcern, String readPreference){
String id=null;
if(tm ==null){
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(backendType, memoryType, dbNames, writeConcern, readPreference);
}
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
tm=tmf.getTransport(tm, backendType, memoryType, dbNames, writeConcern, readPreference);
try {
id = tm.getId(bucket, forceCreation);
} catch (Exception e) {
@ -73,4 +75,18 @@ public class GetUrl extends Operation{
return id;
}
private String getId(MyFile myFile){
String id=null;
TransportManager tm=getTransport(myFile);
try {
id = tm.getId(bucket, myFile.isForceCreation());
} catch (Exception e) {
tm.close();
throw new RemoteBackendException(" Error in GetUrl operation. Problem to discover remote file:"+bucket+" "+ e.getMessage(), e.getCause()); }
if (logger.isDebugEnabled()) {
logger.debug(" PATH " + bucket);
}
return id;
}
}
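Both getUrl and getHttpsUrl above encrypt the file id with the client's pass phrase before embedding it as a URL parameter. A hypothetical sketch of what a helper like `Encrypter("DES", phrase)` might do internally; the class and method names are illustrative, not the real gCube API, and DES appears only because the diff names it (it is not a recommendation):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;

// Hypothetical stand-in for Encrypter("DES", phrase): derives an 8-byte DES
// key from the pass phrase and Base64url-encodes the ciphertext so it can be
// carried safely as a URL parameter.
public class DesParamCodec {
    private static SecretKey key(String passPhrase) throws Exception {
        // DES keys are exactly 8 bytes; pad or truncate the pass phrase
        byte[] raw = Arrays.copyOf(passPhrase.getBytes(StandardCharsets.UTF_8), 8);
        return SecretKeyFactory.getInstance("DES").generateSecret(new DESKeySpec(raw));
    }

    static String encrypt(String passPhrase, String plaintext) {
        try {
            Cipher c = Cipher.getInstance("DES"); // DES/ECB/PKCS5Padding by default
            c.init(Cipher.ENCRYPT_MODE, key(passPhrase));
            return Base64.getUrlEncoder().encodeToString(
                    c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static String decrypt(String passPhrase, String ciphertext) {
        try {
            Cipher c = Cipher.getInstance("DES");
            c.init(Cipher.DECRYPT_MODE, key(passPhrase));
            return new String(c.doFinal(Base64.getUrlDecoder().decode(ciphertext)),
                    StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String id = "61f9a2d3c0ffee"; // illustrative object id
        String param = encrypt("my-pass-phrase", id);
        System.out.println(decrypt("my-pass-phrase", param).equals(id)); // true
    }
}
```

The URL-safe Base64 alphabet avoids `+` and `/`, so the ciphertext needs no extra URL-encoding step.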


@ -4,7 +4,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryBucket;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
@ -20,8 +19,7 @@ public class GetUserTotalItems extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String dim=null;
logger.info("check user total items for user: "+getOwner()+ " user is "+user);
try {


@ -4,7 +4,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryBucket;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
@ -13,15 +12,13 @@ import org.slf4j.LoggerFactory;
public class GetUserTotalVolume extends Operation {
final Logger logger=LoggerFactory.getLogger(GetUserTotalVolume.class);
// public String file_separator = ServiceEngine.FILE_SEPARATOR;//System.getProperty("file.separator");
public GetUserTotalVolume(String[] server, String user, String pwd, String bucket, Monitor monitor, boolean isChunk, String backendType, String[] dbs) {
super(server, user, pwd, bucket, monitor, isChunk, backendType, dbs);
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String dim=null;
logger.info("check user total volume for user: "+getOwner()+ " user is "+user);
try {


@ -5,7 +5,6 @@ import java.net.UnknownHostException;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@ -37,8 +36,7 @@ public abstract class Link extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String id=null;
try {
id=tm.link(this);


@ -44,8 +44,7 @@ public abstract class Lock extends Operation {
Download download = new DownloadOperator(getServer(), getUser(), getPassword(), getBucket(), getMonitor(), isChunk(), getBackendType(), getDbNames());
unlockKey=get(download, myFile, true);
} catch (Exception e) {
TransportManagerFactory tmf=new TransportManagerFactory(getServer(), getUser(), getPassword());
TransportManager tm=tmf.getTransport(getBackendType(), myFile.getGcubeMemoryType(), getDbNames(), myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
tm.close();
throw new RemoteBackendException(" Error in lock operation ", e.getCause());
}


@ -1,14 +1,11 @@
package org.gcube.contentmanagement.blobstorage.service.operation;
import java.io.OutputStream;
import java.net.UnknownHostException;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.DirectoryBucket;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@ -40,11 +37,9 @@ public abstract class Move extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String id=null;
try {
// id=tm.move(myFile, sourcePath, destinationPath);
id=tm.move(this);
} catch (UnknownHostException e) {
tm.close();


@ -7,7 +7,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@ -39,8 +38,7 @@ public abstract class MoveDir extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
List<String>ids=null;
try {
ids=tm.moveDir(this);


@ -3,7 +3,6 @@ package org.gcube.contentmanagement.blobstorage.service.operation;
import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
@ -40,6 +39,7 @@ public abstract class Operation {
private Monitor monitor;
private boolean isChunk;
String backendType;
protected static TransportManager transport;
public Operation(String[] server, String user, String pwd, String bucket, Monitor monitor, boolean isChunk, String backendType, String[] dbs){
this.server=server;
@ -159,8 +159,7 @@ public abstract class Operation {
}else{
if(logger.isDebugEnabled())
logger.debug("NO THREAD POOL USED");
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, resource.getGcubeMemoryType(), dbNames, resource.getWriteConcern(), resource.getReadPreference());
TransportManager tm=getTransport(resource);
String objectId=tm.uploadManager(upload, resource, bucket, bucket+"_1", replaceOption);
return objectId;
}
@ -177,12 +176,7 @@ public abstract class Operation {
logger.debug("get(String) - start");
}
String unlocKey=null;
TransportManagerFactory tmf=null;
// if(server.length >1)
tmf=new TransportManagerFactory(server, user, password);
// else
// tmf=new TransportManagerFactory(server, null, null);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
long start=System.currentTimeMillis();
String path=myFile.getLocalPath();
if(!Costants.CLIENT_TYPE.equalsIgnoreCase("mongo")){
@ -380,6 +374,10 @@ public abstract class Operation {
this.user = user;
}
protected TransportManager getTransport(MyFile myFile) {
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
transport=tmf.getTransport(transport, backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
return transport;
}
}
}
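The hunk above is the heart of the refactor: every operation now funnels through a shared `getTransport(MyFile)` helper that caches the `TransportManager` in a static field, instead of constructing a new `TransportManagerFactory` on each call. A minimal, self-contained sketch of that caching pattern (all names here are illustrative stand-ins, not the real gCube classes):

```java
// Illustrative sketch of the caching pattern behind Operation.getTransport:
// the factory hands back the cached instance unless the requested
// configuration differs, in which case a fresh one is created.
class Transport {
    final String memoryType;
    Transport(String memoryType) { this.memoryType = memoryType; }
}

class TransportFactory {
    Transport getTransport(Transport cached, String memoryType) {
        if (cached != null && cached.memoryType.equals(memoryType)) {
            return cached;                // reuse the open connection
        }
        return new Transport(memoryType); // configuration changed: new instance
    }
}

public class CachingFactoryDemo {
    static Transport shared; // mirrors the static Operation.transport field

    static Transport getTransport(String memoryType) {
        shared = new TransportFactory().getTransport(shared, memoryType);
        return shared;
    }

    public static void main(String[] args) {
        Transport a = getTransport("PERSISTENT");
        Transport b = getTransport("PERSISTENT");
        Transport c = getTransport("VOLATILE");
        System.out.println(a == b); // true: cached instance reused
        System.out.println(b == c); // false: new instance after config change
    }
}
```

Note that caching in a static field, as the diff does, shares one connection across all operations in the JVM; that is presumably intentional here, but it is worth keeping thread safety in mind.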


@ -25,7 +25,9 @@ public class OperationFactory {
Monitor monitor;
boolean isChunk;
private String backendType;
private String token;
public OperationFactory(String server[], String user, String pwd, String bucket, Monitor monitor2, boolean isChunk, String backendType, String[] dbs){
this.server=server;
this.user=user;
@ -50,6 +52,8 @@ public class OperationFactory {
op=new Remove(server, user, password, bucket, monitor, isChunk, backendType, dbNames);
}else if(operation.equalsIgnoreCase("getSize")){
op=new GetSize(server, user, password, bucket, monitor, isChunk, backendType, dbNames);
}else if(operation.equalsIgnoreCase("forceclose")){
op=new ForceClose(server, user, password, bucket, monitor, isChunk, backendType, dbNames);
}else if(operation.equalsIgnoreCase("duplicate")){
op=new DuplicateOperator(server, user, password, bucket, monitor, isChunk, backendType, dbNames);
}else if(operation.equalsIgnoreCase("softcopy")){
@ -111,4 +115,12 @@ public class OperationFactory {
return op;
}
public String getToken() {
return token;
}
public void setToken(String token) {
this.token = token;
}
}
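The dispatch in OperationFactory is a growing if/else chain over operation names, which the new `forceclose` branch above extends. As a design note, the same lookup can be sketched as a map-based registry; the operation names are taken from the diff, everything else is illustrative:

```java
import java.util.Map;
import java.util.function.Supplier;

// Illustrative alternative to the if/else chain: a registry keyed by the
// lower-cased operation name. Operation here is a stand-in interface.
public class OperationRegistryDemo {
    interface Operation { String name(); }

    private static final Map<String, Supplier<Operation>> REGISTRY = Map.of(
            "getsize",    () -> () -> "GetSize",
            "forceclose", () -> () -> "ForceClose",
            "duplicate",  () -> () -> "DuplicateOperator");

    static Operation getOperation(String operation) {
        Supplier<Operation> s = REGISTRY.get(operation.toLowerCase());
        if (s == null) {
            throw new IllegalArgumentException("unknown operation: " + operation);
        }
        return s.get();
    }

    public static void main(String[] args) {
        System.out.println(getOperation("forceClose").name()); // ForceClose
    }
}
```

A registry keeps each new operation a one-line addition and makes the full set of supported names easy to report in error messages.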


@ -32,7 +32,8 @@ public class OperationManager {
private String[] dbNames;
public OperationManager(String[] server, String user, String password, String operation, MyFile myFile, String backendType, String[] dbs){
public OperationManager(String[] server, String user, String password, String operation, MyFile myFile, String backendType, String[] dbs, String token){
this.setServer(server);
this.setUser(user);
this.setPassword(password);
@ -41,6 +42,7 @@ public class OperationManager {
this.setTypeOperation(operation);
this.setDbNames(dbs);
this.backendType=backendType;
}
public Object startOperation(MyFile file, String remotePath, String author, String[] server, boolean chunkOpt, String rootArea, boolean replaceOption) throws RemoteBackendException{
@ -144,5 +146,6 @@ public class OperationManager {
this.dbNames = dbNames;
}
}


@ -3,7 +3,6 @@ package org.gcube.contentmanagement.blobstorage.service.operation;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
@ -25,8 +24,7 @@ public class Remove extends Operation{
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
removeBucket(tm, bucket, myFile);
if (logger.isDebugEnabled()) {
logger.debug(" REMOVE " + bucket);


@ -5,7 +5,6 @@ import java.io.OutputStream;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -30,8 +29,7 @@ public class RenewTTL extends Operation {
@Override
public String doIt(MyFile myFile) throws RemoteBackendException {
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
long ttl=-1;
try {
myFile.setRemotePath(bucket);


@ -4,7 +4,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -21,14 +20,14 @@ public class SetMetaInfo extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
try {
tm.setFileProperty(bucket, myFile.getGenericPropertyField(), myFile.getGenericPropertyValue());
} catch (Exception e) {
tm.close();
e.printStackTrace();
throw new RemoteBackendException(" Error in SetMetaInfo operation ", e.getCause()); }
logger.error("Problem setting file property", e);
throw new RemoteBackendException(" Error in SetMetaInfo operation ", e); }
if (logger.isDebugEnabled()) {
logger.debug(" PATH " + bucket);
}


@ -5,11 +5,9 @@ package org.gcube.contentmanagement.blobstorage.service.operation;
import java.net.UnknownHostException;
import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.slf4j.Logger;
@ -35,21 +33,6 @@ public abstract class SoftCopy extends Operation {
}
public String initOperation(MyFile file, String remotePath, String author, String[] server, String rootArea, boolean replaceOption) {
// if(remotePath != null){
// boolean isId=ObjectId.isValid(remotePath);
// setResource(file);
// if(!isId){
//// String[] dirs= remotePath.split(file_separator);
// if(logger.isDebugEnabled())
// logger.debug("remotePath: "+remotePath);
// String buck=null;
// buck = new BucketCoding().bucketFileCoding(remotePath, rootArea);
// return bucket=buck;
// }else{
// return bucket=remotePath;
// }
// }return bucket=null;//else throw new RemoteBackendException("argument cannot be null");
this.sourcePath=file.getLocalPath();
this.destinationPath=remotePath;
sourcePath = new BucketCoding().bucketFileCoding(file.getLocalPath(), rootArea);
@ -60,8 +43,7 @@ public abstract class SoftCopy extends Operation {
}
public String doIt(MyFile myFile) throws RemoteBackendException{
TransportManagerFactory tmf= new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
String id=null;
try {
id=tm.softCopy(this);
@ -83,20 +65,6 @@ public abstract class SoftCopy extends Operation {
destinationPath = new BucketCoding().bucketFileCoding(resource.getRemotePath(), rootArea);
setResource(resource);
return bucket=destinationPath;
// if(remotePath != null){
// boolean isId=ObjectId.isValid(remotePath);
// setResource(resource);
// if(!isId){
//// String[] dirs= remotePath.split(file_separator);
// if(logger.isDebugEnabled())
// logger.debug("remotePath: "+remotePath);
// String buck=null;
// buck = new BucketCoding().bucketFileCoding(remotePath, rootArea);
// return bucket=buck;
// }else{
// return bucket=remotePath;
// }
// }return bucket=null;//else throw new RemoteBackendException("argument cannot be null");
}
public abstract String execute(MongoIOManager mongoPrimaryInstance, MyFile resource, String sourcePath, String destinationPath) throws UnknownHostException;


@ -5,7 +5,6 @@ import java.io.OutputStream;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.operation.UploadOperator;
@ -43,8 +42,7 @@ public abstract class Unlock extends Operation {
// add a parameter for the lock
objectId=put(upload, myFile, isChunk(), false, false, true);
} catch (Exception e) {
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=getTransport(myFile);
tm.close();
throw new RemoteBackendException(" Error in unlock operation ", e.getCause());
}


@ -7,7 +7,6 @@ import java.io.OutputStream;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.service.directoryOperation.BucketCoding;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.TransportManagerFactory;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
@ -49,8 +48,8 @@ public abstract class Upload extends Operation {
try {
objectId=put(this, myFile, isChunk(), false, replaceOption, false);
} catch (Throwable e) {
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
e.printStackTrace();
TransportManager tm=getTransport(myFile);
tm.close();
logger.error("Problem in upload from: "+myFile.getLocalPath()+": "+e.getMessage());
throw new RemoteBackendException(" Error in upload operation ", e.getCause());


@ -33,9 +33,10 @@ public class UploadAndUnlock extends Operation {
objectId=put(upload, myFile, isChunk(), false, false, true);
} catch (Exception e) {
TransportManagerFactory tmf=new TransportManagerFactory(server, user, password);
TransportManager tm=tmf.getTransport(backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
TransportManager tm=tmf.getTransport(transport, backendType, myFile.getGcubeMemoryType(), dbNames, myFile.getWriteConcern(), myFile.getReadPreference());
tm.close();
throw new RemoteBackendException(" Error in uploadAndUnlock operation ", e.getCause()); }
throw new RemoteBackendException(" Error in uploadAndUnlock operation ", e);
}
return objectId;
}


@@ -1,28 +0,0 @@
package org.gcube.contentmanagement.blobstorage.test;
import java.util.List;
import org.gcube.contentmanagement.blobstorage.service.IClient;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.transport.backend.RemoteBackendException;
import org.gcube.contentmanagement.blobstorage.resource.StorageObject;
public class SimpleTest2 {
public static void main(String[] args) throws RemoteBackendException{
String[] server=new String[]{"146.48.123.73","146.48.123.74" };
IClient client=new ServiceEngine(server, "rcirillo", "cnr", "private", "rcirillo");
// String localFile="/home/rcirillo/FilePerTest/CostaRica.jpg";
String remoteFile="/img/shared9.jpg";
String newFile="/home/rcirillo/FilePerTest/repl4.jpg";
client.get().LFile(newFile).RFile(remoteFile);
List<StorageObject> list=client.showDir().RDir("/img/");
for(StorageObject obj : list){
System.out.println("obj found: "+obj.getName());
}
String uri=client.getUrl().RFile(remoteFile);
System.out.println(" uri file: "+uri);
}
}


@@ -23,7 +23,7 @@ import com.mongodb.MongoException;
public abstract class TransportManager {
protected MemoryType memoryType;
/**
* This method specifies the type of the backend for dynamic loading
* For mongoDB, default backend, the name is MongoDB
@@ -36,6 +36,7 @@ public abstract class TransportManager {
* @param server array that contains the IPs of the backend servers
* @param pass
* @param user
* @param token api token, if required by the backend
*/
public abstract void initBackend(String[] server, String user, String pass, MemoryType memoryType, String[] dbNames, String writeConcern, String readConcern);
@@ -155,10 +156,11 @@ public abstract class TransportManager {
/**
* get the size of the remote file
* @param bucket identifies the remote file path
* @param myFile the file wrapper
* @return the size of the remote file
* @throws UnknownHostException
*/
public abstract long getSize(String bucket);
public abstract long getSize(String bucket, MyFile myFile);
/**
* lock a remote file
@@ -324,6 +326,8 @@ public abstract class TransportManager {
public abstract String getField(String remoteIdentifier, String fieldName) throws UnknownHostException ;
public abstract void close();
public abstract void forceClose();
public abstract void setFileProperty(String remotePath, String propertyField, String propertyValue);


@@ -4,10 +4,12 @@ package org.gcube.contentmanagement.blobstorage.transport;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Objects;
import java.util.ServiceLoader;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoOperationManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -25,9 +27,12 @@ public class TransportManagerFactory {
// private static final Logger logger = Logger.getLogger(OperationFactory.class);
final Logger logger = LoggerFactory.getLogger(TransportManagerFactory.class);
// TerrastoreClient client;
String[] server;
String user;
String password;
private String[] server;
private String user;
private String password;
private MemoryType memoryType;
private String dbNames;
TransportManager transport;
public TransportManagerFactory(String server[], String user, String password){
this.server=server;
@@ -35,25 +40,37 @@ public class TransportManagerFactory {
this.password=password;
}
public TransportManager getTransport(String backendType, MemoryType memoryType, String[] dbNames, String writeConcern, String readConcern){
public TransportManager getTransport(TransportManager tm, String backendType, MemoryType memoryType, String[] dbNames, String writeConcern, String readConcern){
if (logger.isDebugEnabled()) {
logger.debug("getOperation(String) - start");
}
return load(backendType, memoryType, dbNames, writeConcern, readConcern);
if(logger.isDebugEnabled() && (!Objects.isNull(transport)))
logger.debug("transportLayer with "+transport.memoryType+" already instantiated. New memoryType request is "+memoryType);
// if no transport layer is instantiated, or the current one is instantiated on another memory type (persistent, volatile),
// then a new transport layer is needed
if(Objects.isNull(tm) || Objects.isNull(tm.memoryType) || (!tm.memoryType.equals(memoryType))) {
logger.info("new transport layer instantiated for "+memoryType+" memory");
return load(backendType, memoryType, dbNames, writeConcern, readConcern);
}else {
logger.debug("new transport layer not instantiated.");
}
return tm;
}
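The reuse check added to `getTransport` above can be sketched in isolation. This is a minimal, hypothetical stand-in (the `Transport` class and `MemoryType` enum below are simplified placeholders for `TransportManager` and the real `MemoryType`), not the project's actual code:

```java
import java.util.Objects;

public class TransportReuseSketch {
    enum MemoryType { PERSISTENT, VOLATILE }

    static class Transport {
        final MemoryType memoryType;
        Transport(MemoryType m) { this.memoryType = m; }
    }

    // Return the cached transport when it exists and was created for the
    // same memory type; otherwise instantiate a new transport layer.
    static Transport getTransport(Transport cached, MemoryType requested) {
        if (Objects.isNull(cached) || Objects.isNull(cached.memoryType)
                || !cached.memoryType.equals(requested)) {
            return new Transport(requested); // new transport layer needed
        }
        return cached; // reuse the existing layer
    }

    public static void main(String[] args) {
        Transport t1 = getTransport(null, MemoryType.PERSISTENT);
        System.out.println(getTransport(t1, MemoryType.PERSISTENT) == t1); // same memory type: reused
        System.out.println(getTransport(t1, MemoryType.VOLATILE) == t1);   // different memory type: replaced
    }
}
```

The point of the change is that switching between persistent and volatile memory is the only event that forces a new transport layer; everything else reuses the existing instance.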
private TransportManager load(String backendType, MemoryType memoryType, String[] dbNames, String writeConcern, String readConcern){
ServiceLoader<TransportManager> loader = ServiceLoader.load(TransportManager.class);
Iterator<TransportManager> iterator = loader.iterator();
List<TransportManager> impls = new ArrayList<TransportManager>();
logger.info("Try to load the backend...");
logger.info("the specified backend passed as input param is "+backendType);
while(iterator.hasNext())
impls.add(iterator.next());
int implementationCounted=impls.size();
// System.out.println("size: "+implementationCounted);
if(implementationCounted==0){
if((implementationCounted==0) || Costants.DEFAULT_TRANSPORT_MANAGER.equals(backendType)){
logger.info("no implementation found or default backend requested. Load the default TransportManager implementation");
return new MongoOperationManager(server, user, password, memoryType, dbNames, writeConcern, readConcern);
}else if(implementationCounted==1){
}else if((implementationCounted==1) && Objects.isNull(backendType)){
TransportManager tm = impls.get(0);
logger.info("1 implementation of TransportManager found. Load it. "+tm.getName());
tm.initBackend(server, user, password, memoryType, dbNames, writeConcern, readConcern);
@@ -64,6 +81,7 @@ public class TransportManagerFactory {
for(TransportManager tm : impls){
if(tm.getName().equalsIgnoreCase(backendType)){
logger.info("Found implementation "+backendType);
tm.initBackend(server, user, password, memoryType, dbNames, writeConcern, readConcern);
return tm;
}
}
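The selection rules in `load` (default backend when nothing is discovered or the default is requested, the single discovered implementation when no backend name is given, otherwise a case-insensitive name match) can be sketched without `ServiceLoader`, which needs `META-INF/services` registration. The sketch below substitutes a plain list of backend names for the discovered implementations; names are illustrative:

```java
import java.util.List;

public class BackendSelectionSketch {
    // Pick a backend name following the same precedence as
    // TransportManagerFactory.load, over a list of discovered names.
    static String select(List<String> discovered, String requested, String defaultName) {
        if (discovered.isEmpty() || defaultName.equals(requested)) {
            return defaultName;                     // fall back to the default backend
        } else if (discovered.size() == 1 && requested == null) {
            return discovered.get(0);               // exactly one implementation: use it
        }
        for (String name : discovered) {
            if (name.equalsIgnoreCase(requested)) { // explicit, case-insensitive match
                return name;
            }
        }
        return defaultName;
    }

    public static void main(String[] args) {
        System.out.println(select(List.of(), "S3", "MongoDB"));               // no impls found
        System.out.println(select(List.of("S3"), null, "MongoDB"));           // single impl
        System.out.println(select(List.of("S3", "Ceph"), "ceph", "MongoDB")); // name match
    }
}
```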


@@ -16,8 +16,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPERATION;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.REMOTE_RESOURCE;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.service.operation.Operation;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.DateUtils;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.MongoInputStream;
@@ -119,7 +117,7 @@ public class MongoIOManager {
logger.error("Problem opening the DB connection for gridfs file");
throw new RemoteBackendException("Problem opening the DB connection: "+ e.getMessage());
}
logger.info("new mongo connection pool opened");
logger.info("mongo connection ready");
}
return db;
@@ -310,6 +308,7 @@ public class MongoIOManager {
updateCommonFields(f, resource, OPERATION.REMOVE);
// check if the file is linked
if((f!=null) && (f.containsField(Costants.COUNT_IDENTIFIER)) && (f.get(Costants.COUNT_IDENTIFIER) != null)){
logger.debug("RemovingObject: the following object "+idToRemove+" contains a COUNT field");
// this field is only added for reporting tool: storage-manager-trigger
String filename=(String)f.get("filename");
f.put("onScope", filename);
@@ -322,6 +321,7 @@ public class MongoIOManager {
// check if the file is a link
}else if((f.containsField(Costants.LINK_IDENTIFIER)) && (f.get(Costants.LINK_IDENTIFIER) != null )){
while((f!=null) && (f.containsField(Costants.LINK_IDENTIFIER)) && (f.get(Costants.LINK_IDENTIFIER) != null )){
logger.debug("RemovingObject: the following object "+idToRemove+" contains a LINK field");
// remove f and decrement linkCount field on linked object
String id=(String)f.get(Costants.LINK_IDENTIFIER);
GridFSDBFile fLink=findGFSCollectionObject(new ObjectId(id));
@@ -547,10 +547,13 @@ public class MongoIOManager {
destinationFile.put("creationTime", DateUtils.now("dd MM yyyy 'at' hh:mm:ss z"));
}
public BasicDBObject setGenericMoveProperties(MyFile resource, String filename, String dir,
String name, BasicDBObject f) {
f.append("filename", filename).append("type", "file").append("name", name).append("dir", dir);
return f;
public DBObject setGenericMoveProperties(MyFile resource, String filename, String dir,
String name, DBObject sourcePathMetaCollection) {
sourcePathMetaCollection.put("filename", filename);
sourcePathMetaCollection.put("type", "file");
sourcePathMetaCollection.put("name", name);
sourcePathMetaCollection.put("dir", dir);
return sourcePathMetaCollection;
}
@@ -666,6 +669,10 @@ public class MongoIOManager {
f=null;
}
}
if (f==null) {
logger.warn("The objectID is not present. Going to abort the current operation");
throw new RemoteBackendException("Object id "+serverLocation+" not found.");
}
// if the remote identifier is not a specified as ID, try to check if it is a valid remote path
// in this case the remote identifier is a valid objectID but it indicates a path
}else if ((remoteResourceIdentifier != null) && (!(remoteResourceIdentifier.equals(REMOTE_RESOURCE.ID))) && (f==null)){
@@ -778,10 +785,10 @@ public class MongoIOManager {
return list;
}
public BasicDBObject findMetaCollectionObject(String source) throws UnknownHostException {
public DBObject findMetaCollectionObject(String source) throws UnknownHostException {
DBCollection fileCollection=getConnectionDB(dbName, false).getCollection(Costants.DEFAULT_META_COLLECTION);
BasicDBObject query = new BasicDBObject();
BasicDBObject obj=null;
DBObject obj=null;
query.put( "filename" ,source);
DBCursor cursor=fileCollection.find(query);
if(cursor != null && !cursor.hasNext()){
@@ -790,7 +797,7 @@ public class MongoIOManager {
cursor=fileCollection.find(query);
}
if(cursor.hasNext()){
obj=(BasicDBObject) cursor.next();
obj=(DBObject) cursor.next();
String path=(String)obj.get("filename");
logger.debug("path found "+path);
}
@@ -1048,11 +1055,11 @@ public class MongoIOManager {
* the old close method
*/
protected void clean() {
if(mongo!=null)
mongo.close();
mongo=null;
if(db!=null)
db=null;
// if(mongo!=null)
// mongo.close();
// mongo=null;
// if(db!=null)
// db=null;
}
/**
@@ -1062,14 +1069,24 @@
*/
public void close() {
if(mongo!=null)
mongo.close();
logger.info("Mongo has been closed");
mongo=null;
// if(mongo!=null)
// mongo.close();
logger.debug(" cleaning mongo objects");
// logger.info("Mongo has been closed");
// mongo=null;
gfs=null;
db=null;
}
public void forceClose() {
if(mongo!=null)
mongo.close();
logger.info("Mongo pool closed");
close();
mongo=null;
}
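The revised semantics split shutdown in two: `close()` only drops the per-request handles (`gfs`, `db`) so the shared Mongo connection pool survives, while `forceClose()` actually closes the pool. A hypothetical minimal sketch of that split (the `Pool` and `Manager` classes are illustrative stand-ins, not the real Mongo types):

```java
public class CloseSemanticsSketch {
    static class Pool {
        boolean closed;
        void shutdown() { closed = true; }
    }

    static class Manager {
        Pool pool = new Pool();
        Object gfs = new Object(), db = new Object();

        void close() {          // soft close: drop handles, keep the pool alive
            gfs = null;
            db = null;
        }

        void forceClose() {     // hard close: release the pool as well
            close();
            if (pool != null) pool.shutdown();
            pool = null;
        }
    }

    public static void main(String[] args) {
        Manager m = new Manager();
        m.close();
        System.out.println(m.pool != null); // pool survives a soft close
        m.forceClose();
        System.out.println(m.pool == null); // pool released after force close
    }
}
```

This matches the intent of the commented-out `mongo.close()` calls above: repeated soft closes no longer tear down the pool, and only `forceClose()` does.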
public void removeGFSFile(GridFSDBFile f, ObjectId idF){
// this field is an advice for oplog collection reader
f.put("onDeleting", "true");


@@ -1,4 +1,4 @@
package org.gcube.contentmanagement.blobstorage.transport.backend;
package org.gcube.contentmanagement.blobstorage.transport.backend;
import org.bson.types.ObjectId;
@@ -11,11 +11,11 @@ import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.service.operation.*;
import org.gcube.contentmanagement.blobstorage.transport.TransportManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
@@ -26,6 +26,7 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoException;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSDBFile;
@@ -55,8 +56,10 @@ public class MongoOperationManager extends TransportManager{
@Override
public void initBackend(String[] server, String user, String pass, MemoryType memoryType , String[] dbNames, String writeConcern, String readConcern) {
logger.debug("init storage backend with "+memoryType+" memory");
try {
this.memoryType=memoryType;
super.memoryType=memoryType;
MongoOperationManager.dbNames=dbNames;
logger.debug("check mongo configuration");
if (dbNames!=null){
@@ -130,6 +133,13 @@ public class MongoOperationManager extends TransportManager{
// mongoSecondaryInstance.close();
}
public void forceClose() {
if(Objects.nonNull(mongoPrimaryInstance))
mongoPrimaryInstance.forceClose();
if(Objects.nonNull(mongoSecondaryInstance))
mongoSecondaryInstance.forceClose();
}
/**
* Unlock the object specified, this method accept the key field for the unlock operation
* @throws FileNotFoundException
@@ -290,7 +300,7 @@ public class MongoOperationManager extends TransportManager{
}
@Override
public long getSize(String remotePath){
public long getSize(String remotePath, MyFile file){
long length=-1;
if(logger.isDebugEnabled())
logger.debug("MongoDB - get Size for pathServer: "+remotePath);
@@ -456,7 +466,7 @@
*/
private void updateMetaObject(String remoteIdentifier, String propertyField, String propertyValue)
throws UnknownHostException {
BasicDBObject remoteMetaCollectionObject;
DBObject remoteMetaCollectionObject;
logger.debug("find object...");
remoteMetaCollectionObject = mongoPrimaryInstance.findMetaCollectionObject(remoteIdentifier);
if(remoteMetaCollectionObject!=null){


@@ -10,7 +10,6 @@ import java.util.List;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPERATION;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.service.operation.CopyDir;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoOperationManager;


@@ -12,7 +12,6 @@ import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPER
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.REMOTE_RESOURCE;
import org.gcube.contentmanagement.blobstorage.service.operation.Link;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.service.operation.Operation;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.util.Costants;
import org.slf4j.Logger;


@@ -8,7 +8,6 @@ import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPERATION;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.REMOTE_RESOURCE;
import org.gcube.contentmanagement.blobstorage.service.operation.Download;
import org.gcube.contentmanagement.blobstorage.service.operation.Lock;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;


@@ -11,7 +11,6 @@ import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPERATION;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.service.operation.MoveDir;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoOperationManager;


@@ -9,7 +9,6 @@ import java.net.UnknownHostException;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
import org.gcube.contentmanagement.blobstorage.resource.MyFile;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPERATION;
import org.gcube.contentmanagement.blobstorage.service.impl.ServiceEngine;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.service.operation.Move;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoOperationManager;
@@ -65,7 +64,7 @@ public class MoveOperator extends Move {
logger.info("move operation on Mongo backend, parameters: source path: "+source+" destination path: "+destination);
logger.debug("MOVE OPERATION operation defined: "+resource.getOperationDefinition().getOperation());
if((source != null) && (!source.isEmpty()) && (destination != null) && (!destination.isEmpty())){
BasicDBObject sourcePathMetaCollection = mongoPrimaryInstance.findMetaCollectionObject(source);
DBObject sourcePathMetaCollection = mongoPrimaryInstance.findMetaCollectionObject(source);
//check if the file exist in the destination path, if it exist then it will be deleted
if(sourcePathMetaCollection != null){
sourceId=sourcePathMetaCollection.get("_id").toString();
@@ -175,7 +174,7 @@ public class MoveOperator extends Move {
}
private BasicDBObject setCommonFields(BasicDBObject f, MyFile resource, OPERATION op) {
private DBObject setCommonFields(DBObject sourcePathMetaCollection, MyFile resource, OPERATION op) {
String owner=resource.getOwner();
if(op == null){
op=resource.getOperationDefinition().getOperation();
@@ -188,14 +187,23 @@ public class MoveOperator extends Move {
String address=null;
try {
address=InetAddress.getLocalHost().getCanonicalHostName().toString();
f.put("callerIP", address);
sourcePathMetaCollection.put("callerIP", address);
} catch (UnknownHostException e) { }
if(from == null)
f.append("lastAccess", DateUtils.now("dd MM yyyy 'at' hh:mm:ss z")).append("lastUser", owner).append("lastOperation", op.toString()).append("callerIP", address);
else
f.append("lastAccess", DateUtils.now("dd MM yyyy 'at' hh:mm:ss z")).append("lastUser", owner).append("lastOperation", op.toString()).append("callerIP", address).append("from", from);
return f;
if(from == null) {
sourcePathMetaCollection.put("lastAccess", DateUtils.now("dd MM yyyy 'at' hh:mm:ss z"));
sourcePathMetaCollection.put("lastUser", owner);
sourcePathMetaCollection.put("lastOperation", op.toString());
sourcePathMetaCollection.put("callerIP", address);
}else {
sourcePathMetaCollection.put("lastAccess", DateUtils.now("dd MM yyyy 'at' hh:mm:ss z"));
sourcePathMetaCollection.put("lastUser", owner);
sourcePathMetaCollection.put("lastOperation", op.toString());
sourcePathMetaCollection.put("callerIP", address);
sourcePathMetaCollection.put("from", from);
}
return sourcePathMetaCollection;
}
}


@@ -5,6 +5,7 @@ package org.gcube.contentmanagement.blobstorage.transport.backend.operation;
import java.io.InputStream;
import java.net.UnknownHostException;
import java.util.Objects;
import org.bson.types.ObjectId;
import org.gcube.contentmanagement.blobstorage.resource.MemoryType;
@@ -13,7 +14,6 @@ import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.LOCA
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPERATION;
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.REMOTE_RESOURCE;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.service.operation.Operation;
import org.gcube.contentmanagement.blobstorage.service.operation.SoftCopy;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoOperationManager;
@@ -26,6 +26,7 @@ import org.slf4j.LoggerFactory;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.DuplicateKeyException;
import com.mongodb.gridfs.GridFSDBFile;
/**
@@ -85,6 +86,7 @@ public class SoftCopyOperator extends SoftCopy {
// if it contains a link field, then I'm going to retrieve the related payload
sourceObject = mongoPrimaryInstance.retrieveLinkPayload(sourceObject);
ObjectId sourceId=(ObjectId)sourceObject.getId();
logger.debug("source id is "+sourceId);
InputStream is= sourceObject.getInputStream();
resource.setInputStream(is);
GridFSDBFile dest = null;
@@ -103,11 +105,18 @@ public class SoftCopyOperator extends SoftCopy {
ObjectId removedId=null;
// if the destination location is not empty
if (dest != null){
String destId=dest.getId().toString();
logger.debug("destination id is "+destId);
// in this case the source and dest are the same object
if(sourceId.toString().equals(destId)) {
logger.info("source and destination are pointing to the same object. The copy operation will have no effect");
return destId;
}
// remove the destination file; if the replace option is true the file is replaced, otherwise the remote id is returned
if(resource.isReplace()){
removedId = mongoPrimaryInstance.removeFile(resource, null, resource.isReplace(), null, dest);
}else{
return dest.getId().toString();
return destId;
}
}
// get metacollection instance
@@ -117,7 +126,7 @@ public class SoftCopyOperator extends SoftCopy {
ObjectId md5Id=getDuplicatesMap(md5);
// check if the source object is already a map
if(isMap(sourceObject)){
logger.debug("the sourceObject with the following id: "+mapId+" is already a map");
logger.debug("the sourceObject with the following id: "+sourceId+" is already a map");
mapId=sourceId;
// then it's needed to add only the destObject to the map
//first: create link object to destination place
@@ -208,10 +217,10 @@ public class SoftCopyOperator extends SoftCopy {
ObjectId id=null;
if(newId == null){
id=new ObjectId();
logger.debug("generated id for new object link"+id);
logger.debug("generated id for new object link "+id);
}else{
id=newId;
logger.debug("restored id for new object link"+id);
logger.debug("restored id for new object link "+id);
}
document.put("_id", id);
@@ -225,8 +234,20 @@ public class SoftCopyOperator extends SoftCopy {
document.put("length", sourceObject.getLength());
// set chunkSize inherited from original object
document.put("chunkSize", sourceObject.getChunkSize());
metaCollectionInstance.insert(document);
metaCollectionInstance.save(document);
try {
metaCollectionInstance.insert(document);
metaCollectionInstance.save(document);
}catch (DuplicateKeyException e) {
logger.warn("key already present or not completely removed. Wait a few seconds and retry");
try {
Thread.sleep(2000);
} catch (InterruptedException e1) {
// restore the interrupt status instead of swallowing it
Thread.currentThread().interrupt();
}
metaCollectionInstance.insert(document);
metaCollectionInstance.save(document);
}
return document;
}
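The catch block above implements a simple wait-and-retry-once pattern around the insert. Stripped of the Mongo types, it can be sketched as a small generic helper (the `DuplicateKey` exception and `retryOnce` names are illustrative stand-ins, not part of the project's API):

```java
import java.util.concurrent.Callable;

public class RetryOnceSketch {
    static class DuplicateKey extends RuntimeException {}

    // Run the action; on a duplicate-key failure, pause briefly and retry once.
    static <T> T retryOnce(Callable<T> action, long pauseMillis) throws Exception {
        try {
            return action.call();
        } catch (DuplicateKey e) {
            // the key may not be completely removed yet: wait and retry
            Thread.sleep(pauseMillis);
            return action.call();
        }
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        String result = retryOnce(() -> {
            if (attempts[0]++ == 0) throw new DuplicateKey(); // first try fails
            return "inserted";
        }, 10);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```

A second failure still propagates, which matches the diff: only one retry is attempted before the exception reaches the caller.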
@@ -253,11 +274,21 @@ public class SoftCopyOperator extends SoftCopy {
searchQuery.put("_id" ,mapId);
DBObject mapObject=mongoPrimaryInstance.findCollectionObject(metaCollectionInstance, searchQuery);
// BasicDBObject updateObject= new BasicDBObject().append("$inc",new BasicDBObject().append("count", 1));;
int count=(int)mapObject.get("count");
count++;
mapObject.put("count", count);
// metaCollectionInstance.update(mapObject, updateObject);
metaCollectionInstance.save(mapObject);
if(!Objects.isNull(mapObject)) {
Object counting=mapObject.get("count");
if(Objects.nonNull(counting)) {
int count=(int)counting;
count++;
mapObject.put("count", count);
}else {
mapObject.put("count", 1);
}
// metaCollectionInstance.update(mapObject, updateObject);
metaCollectionInstance.save(mapObject);
}else {
logger.error("no object found associated to the following id: "+mapId);
}
}
private ObjectId getDuplicatesMap(String md5){
@@ -271,8 +302,11 @@
*/
private boolean isMap(GridFSDBFile sourceObject) {
String type=sourceObject.get("type").toString();
if(type.equals("map"))
logger.debug("object type: "+type);
if(type.equals("map")) {
logger.debug("sourceFile is a map: "+sourceObject.toString());
return true;
}
return false;
}


@@ -11,7 +11,6 @@ import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.OPER
import org.gcube.contentmanagement.blobstorage.resource.OperationDefinition.REMOTE_RESOURCE;
import org.gcube.contentmanagement.blobstorage.service.operation.Monitor;
import org.gcube.contentmanagement.blobstorage.service.operation.Unlock;
import org.gcube.contentmanagement.blobstorage.service.operation.Upload;
import org.gcube.contentmanagement.blobstorage.transport.backend.MongoIOManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


@@ -31,8 +31,8 @@ public class MongoInputStream extends ProxyInputStream{
} catch (IOException e) {
e.printStackTrace();
}
if (mongo!=null)
mongo.close();
// if (mongo!=null)
// mongo.close();
setClosed(true);
}
}


@@ -66,7 +66,7 @@ public class MongoOutputStream extends ProxyOutputStream {
// TODO Auto-generated catch block
e.printStackTrace();
}
mongo.close();
// mongo.close();
setClosed(true);
}
}


@@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<Resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ID />
<Type>Service</Type>
<Profile>
<Description>${description}</Description>
<Class>ContentManagement</Class>
<Name>storage-manager-core</Name>
<Version>1.0.0</Version>
<Packages>
<Software>
<Name>storage-manager-core</Name>
<Version>2.9.0-SNAPSHOT</Version>
<MavenCoordinates>
<groupId>org.gcube.contentmanagement</groupId>
<artifactId>storage-manager-core</artifactId>
<version>2.9.0-SNAPSHOT</version>
</MavenCoordinates>
<Files>
<File>storage-manager-core-2.9.0-SNAPSHOT.jar</File>
</Files>
</Software>
</Packages>
</Profile>
</Resource>